00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1011 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3678 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:04.555 The recommended git tool is: git 00:00:04.555 using credential 00000000-0000-0000-0000-000000000002 00:00:04.558 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:04.572 Fetching changes from the remote Git repository 00:00:04.576 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:04.591 Using shallow fetch with depth 1 00:00:04.591 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:04.592 > git --version # timeout=10 00:00:04.604 > git --version # 'git version 2.39.2' 00:00:04.604 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:04.618 Setting http proxy: proxy-dmz.intel.com:911 00:00:04.618 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.493 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.506 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.521 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.521 > git config core.sparsecheckout # timeout=10 00:00:10.535 > git read-tree -mu HEAD # timeout=10 00:00:10.555 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.579 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.580 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.679 [Pipeline] Start of Pipeline 00:00:10.693 [Pipeline] library 00:00:10.695 Loading library shm_lib@master 00:00:10.695 Library shm_lib@master is cached. Copying from home. 00:00:10.709 [Pipeline] node 00:00:10.716 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:10.718 [Pipeline] { 00:00:10.727 [Pipeline] catchError 00:00:10.729 [Pipeline] { 00:00:10.739 [Pipeline] wrap 00:00:10.746 [Pipeline] { 00:00:10.752 [Pipeline] stage 00:00:10.754 [Pipeline] { (Prologue) 00:00:10.770 [Pipeline] echo 00:00:10.771 Node: VM-host-SM0 00:00:10.777 [Pipeline] cleanWs 00:00:10.785 [WS-CLEANUP] Deleting project workspace... 00:00:10.785 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.791 [WS-CLEANUP] done 00:00:11.047 [Pipeline] setCustomBuildProperty 00:00:11.139 [Pipeline] httpRequest 00:00:11.494 [Pipeline] echo 00:00:11.495 Sorcerer 10.211.164.20 is alive 00:00:11.504 [Pipeline] retry 00:00:11.506 [Pipeline] { 00:00:11.520 [Pipeline] httpRequest 00:00:11.525 HttpMethod: GET 00:00:11.526 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.526 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.530 Response Code: HTTP/1.1 200 OK 00:00:11.531 Success: Status code 200 is in the accepted range: 200,404 00:00:11.531 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:25.737 [Pipeline] } 00:00:25.755 [Pipeline] // retry 00:00:25.763 [Pipeline] sh 00:00:26.047 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.065 [Pipeline] httpRequest 00:00:26.599 [Pipeline] echo 00:00:26.601 Sorcerer 10.211.164.20 is alive 00:00:26.612 [Pipeline] retry 00:00:26.614 [Pipeline] { 00:00:26.629 [Pipeline] httpRequest 00:00:26.634 HttpMethod: GET 00:00:26.635 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:26.635 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:26.640 Response Code: HTTP/1.1 200 OK 00:00:26.641 Success: Status code 200 is in the accepted range: 200,404 00:00:26.642 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:04:13.074 [Pipeline] } 00:04:13.093 [Pipeline] // retry 00:04:13.101 [Pipeline] sh 00:04:13.381 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:04:15.923 [Pipeline] sh 00:04:16.222 + git -C spdk log --oneline -n5 00:04:16.222 c13c99a5e test: Various fixes for Fedora40 00:04:16.222 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:04:16.222 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:04:16.222 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:04:16.222 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:04:16.240 [Pipeline] withCredentials 00:04:16.249 > git --version # timeout=10 00:04:16.260 > git --version # 'git version 2.39.2' 00:04:16.272 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:04:16.274 [Pipeline] { 00:04:16.285 [Pipeline] retry 00:04:16.286 [Pipeline] { 00:04:16.302 [Pipeline] sh 00:04:16.583 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:04:16.594 [Pipeline] } 00:04:16.610 [Pipeline] // retry 00:04:16.615 [Pipeline] } 00:04:16.634 [Pipeline] // withCredentials 00:04:16.645 [Pipeline] httpRequest 00:04:16.956 [Pipeline] echo 00:04:16.957 Sorcerer 10.211.164.20 is alive 00:04:16.967 [Pipeline] retry 00:04:16.968 [Pipeline] { 00:04:16.980 [Pipeline] httpRequest 00:04:16.984 HttpMethod: GET 00:04:16.985 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:16.985 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:16.987 Response Code: HTTP/1.1 200 OK 00:04:16.987 Success: Status code 200 is in the accepted range: 200,404 00:04:16.988 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:24.634 [Pipeline] } 00:04:24.651 [Pipeline] // retry 
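
Editor's note: the tarball fetches above run inside Jenkins httpRequest/retry blocks against the internal package cache (10.211.164.20) and are unpacked with tar --no-same-owner. A rough shell equivalent of that fetch-and-unpack step, assuming curl is available locally (the CI itself uses the Jenkins httpRequest step, not curl); the URL is the one from the log:

#!/usr/bin/env bash
# Sketch: fetch a source tarball from the package cache and unpack it,
# approximating the httpRequest + retry + tar steps above. curl is used here
# only for illustration; the CI drives this through Jenkins pipeline steps.
set -euo pipefail

PKG_URL=http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
OUT=$(basename "$PKG_URL")

# Retry the download a few times before giving up, mirroring the retry {} block.
curl --fail --retry 3 --retry-delay 5 -o "$OUT" "$PKG_URL"

# --no-same-owner: extract as the invoking user instead of the archived
# uid/gid, which matters when unpacking as a non-root Jenkins user.
tar --no-same-owner -xf "$OUT"
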
00:04:24.658 [Pipeline] sh 00:04:24.936 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:26.852 [Pipeline] sh 00:04:27.137 + git -C dpdk log --oneline -n5 00:04:27.137 caf0f5d395 version: 22.11.4 00:04:27.137 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:04:27.137 dc9c799c7d vhost: fix missing spinlock unlock 00:04:27.137 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:04:27.137 6ef77f2a5e net/gve: fix RX buffer size alignment 00:04:27.155 [Pipeline] writeFile 00:04:27.170 [Pipeline] sh 00:04:27.451 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:27.463 [Pipeline] sh 00:04:27.742 + cat autorun-spdk.conf 00:04:27.742 SPDK_TEST_UNITTEST=1 00:04:27.742 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:27.742 SPDK_TEST_NVME=1 00:04:27.742 SPDK_TEST_BLOCKDEV=1 00:04:27.742 SPDK_RUN_ASAN=1 00:04:27.742 SPDK_RUN_UBSAN=1 00:04:27.742 SPDK_TEST_RAID5=1 00:04:27.742 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:27.742 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:27.742 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:27.748 RUN_NIGHTLY=1 00:04:27.750 [Pipeline] } 00:04:27.765 [Pipeline] // stage 00:04:27.780 [Pipeline] stage 00:04:27.782 [Pipeline] { (Run VM) 00:04:27.796 [Pipeline] sh 00:04:28.076 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:28.076 + echo 'Start stage prepare_nvme.sh' 00:04:28.076 Start stage prepare_nvme.sh 00:04:28.076 + [[ -n 6 ]] 00:04:28.076 + disk_prefix=ex6 00:04:28.076 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:04:28.076 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:04:28.076 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:04:28.076 ++ SPDK_TEST_UNITTEST=1 00:04:28.076 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:28.076 ++ SPDK_TEST_NVME=1 00:04:28.076 ++ SPDK_TEST_BLOCKDEV=1 00:04:28.076 ++ SPDK_RUN_ASAN=1 00:04:28.076 ++ SPDK_RUN_UBSAN=1 00:04:28.076 ++ SPDK_TEST_RAID5=1 00:04:28.076 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:28.076 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:28.076 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:28.076 ++ RUN_NIGHTLY=1 00:04:28.076 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:04:28.076 + nvme_files=() 00:04:28.076 + declare -A nvme_files 00:04:28.076 + backend_dir=/var/lib/libvirt/images/backends 00:04:28.076 + nvme_files['nvme.img']=5G 00:04:28.076 + nvme_files['nvme-cmb.img']=5G 00:04:28.076 + nvme_files['nvme-multi0.img']=4G 00:04:28.076 + nvme_files['nvme-multi1.img']=4G 00:04:28.076 + nvme_files['nvme-multi2.img']=4G 00:04:28.076 + nvme_files['nvme-openstack.img']=8G 00:04:28.076 + nvme_files['nvme-zns.img']=5G 00:04:28.076 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:28.076 + (( SPDK_TEST_FTL == 1 )) 00:04:28.076 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:28.076 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:04:28.076 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:04:28.076 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:04:28.076 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:04:28.076 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:04:28.076 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:04:28.076 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:28.076 + for nvme in "${!nvme_files[@]}" 00:04:28.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:04:28.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:28.334 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:04:28.334 + echo 'End stage prepare_nvme.sh' 00:04:28.334 End stage prepare_nvme.sh 00:04:28.347 [Pipeline] sh 00:04:28.627 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:28.627 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2204 00:04:28.627 00:04:28.627 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:04:28.627 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:04:28.627 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:04:28.627 HELP=0 00:04:28.627 DRY_RUN=0 00:04:28.627 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:04:28.627 NVME_DISKS_TYPE=nvme, 00:04:28.627 NVME_AUTO_CREATE=0 00:04:28.627 NVME_DISKS_NAMESPACES=, 00:04:28.627 NVME_CMB=, 00:04:28.627 NVME_PMR=, 00:04:28.627 NVME_ZNS=, 00:04:28.627 NVME_MS=, 00:04:28.627 NVME_FDP=, 00:04:28.627 SPDK_VAGRANT_DISTRO=ubuntu2204 00:04:28.627 SPDK_VAGRANT_VMCPU=10 00:04:28.627 SPDK_VAGRANT_VMRAM=12288 00:04:28.627 SPDK_VAGRANT_PROVIDER=libvirt 00:04:28.627 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:28.627 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:28.627 SPDK_OPENSTACK_NETWORK=0 
00:04:28.627 VAGRANT_PACKAGE_BOX=0 00:04:28.627 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:28.627 FORCE_DISTRO=true 00:04:28.627 VAGRANT_BOX_VERSION= 00:04:28.627 EXTRA_VAGRANTFILES= 00:04:28.627 NIC_MODEL=e1000 00:04:28.627 00:04:28.627 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:04:28.627 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:04:31.943 Bringing machine 'default' up with 'libvirt' provider... 00:04:32.511 ==> default: Creating image (snapshot of base box volume). 00:04:32.511 ==> default: Creating domain with the following settings... 00:04:32.511 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1732880917_87698f39ef0d7044bec4 00:04:32.511 ==> default: -- Domain type: kvm 00:04:32.511 ==> default: -- Cpus: 10 00:04:32.511 ==> default: -- Feature: acpi 00:04:32.511 ==> default: -- Feature: apic 00:04:32.511 ==> default: -- Feature: pae 00:04:32.511 ==> default: -- Memory: 12288M 00:04:32.511 ==> default: -- Memory Backing: hugepages: 00:04:32.511 ==> default: -- Management MAC: 00:04:32.511 ==> default: -- Loader: 00:04:32.511 ==> default: -- Nvram: 00:04:32.511 ==> default: -- Base box: spdk/ubuntu2204 00:04:32.511 ==> default: -- Storage pool: default 00:04:32.511 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1732880917_87698f39ef0d7044bec4.img (20G) 00:04:32.511 ==> default: -- Volume Cache: default 00:04:32.511 ==> default: -- Kernel: 00:04:32.511 ==> default: -- Initrd: 00:04:32.511 ==> default: -- Graphics Type: vnc 00:04:32.511 ==> default: -- Graphics Port: -1 00:04:32.511 ==> default: -- Graphics IP: 127.0.0.1 00:04:32.511 ==> default: -- Graphics Password: Not defined 00:04:32.511 ==> default: -- Video Type: cirrus 00:04:32.511 ==> default: -- Video VRAM: 9216 00:04:32.511 ==> default: -- Sound Type: 00:04:32.511 ==> default: -- Keymap: en-us 00:04:32.511 ==> default: -- TPM Path: 00:04:32.511 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:32.511 ==> default: -- Command line args: 00:04:32.511 ==> default: -> value=-device, 00:04:32.511 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:04:32.511 ==> default: -> value=-drive, 00:04:32.511 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:04:32.511 ==> default: -> value=-device, 00:04:32.511 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:32.769 ==> default: Creating shared folders metadata... 00:04:32.769 ==> default: Starting domain. 00:04:35.302 ==> default: Waiting for domain to get an IP address... 00:04:45.282 ==> default: Waiting for SSH to become available... 00:04:46.218 ==> default: Configuring and enabling network interfaces... 00:04:51.492 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:55.683 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:04:59.880 ==> default: Mounting SSHFS shared folder... 00:05:01.270 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:05:01.270 ==> default: Checking Mount.. 00:05:01.845 ==> default: Folder Successfully Mounted! 
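
Editor's note: the domain command-line args above show how the raw backing file created by prepare_nvme.sh is exposed to the guest as an emulated NVMe namespace. A minimal sketch of the same pattern, invoking QEMU directly with the -drive/-device arguments from the log; the machine/boot-disk options are placeholders and the CI actually drives this through vagrant + libvirt rather than a hand-written command line:

#!/usr/bin/env bash
# Sketch: create a raw, fallocate-preallocated backing file (matching the
# "Formatting ... fmt=raw ... preallocation=falloc" lines above) and attach it
# to a guest as an NVMe namespace using the same -drive/-device arguments as
# the libvirt domain. BOOT_DISK and the -machine options are placeholders.
set -euo pipefail

BACKEND=/var/lib/libvirt/images/backends/ex6-nvme.img
BOOT_DISK=ubuntu2204.qcow2      # placeholder guest image

qemu-img create -f raw -o preallocation=falloc "$BACKEND" 5G

qemu-system-x86_64 \
  -machine q35,accel=kvm -cpu host -smp 10 -m 12288 \
  -drive file="$BOOT_DISK",if=virtio \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file="$BACKEND",if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -nographic

# Inside the guest the namespace shows up as /dev/nvme0n1, as seen in the
# setup.sh status table later in this log.
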
00:05:01.845 ==> default: Running provisioner: file... 00:05:02.107 default: ~/.gitconfig => .gitconfig 00:05:02.365 00:05:02.365 SUCCESS! 00:05:02.365 00:05:02.365 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:05:02.365 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:02.365 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:05:02.365 00:05:02.373 [Pipeline] } 00:05:02.389 [Pipeline] // stage 00:05:02.399 [Pipeline] dir 00:05:02.399 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:05:02.401 [Pipeline] { 00:05:02.413 [Pipeline] catchError 00:05:02.414 [Pipeline] { 00:05:02.427 [Pipeline] sh 00:05:02.706 + vagrant ssh-config --host vagrant 00:05:02.706 + sed -ne /^Host/,$p 00:05:02.706 + tee ssh_conf 00:05:06.891 Host vagrant 00:05:06.891 HostName 192.168.121.216 00:05:06.891 User vagrant 00:05:06.891 Port 22 00:05:06.891 UserKnownHostsFile /dev/null 00:05:06.891 StrictHostKeyChecking no 00:05:06.891 PasswordAuthentication no 00:05:06.891 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:05:06.891 IdentitiesOnly yes 00:05:06.891 LogLevel FATAL 00:05:06.891 ForwardAgent yes 00:05:06.891 ForwardX11 yes 00:05:06.891 00:05:06.905 [Pipeline] withEnv 00:05:06.908 [Pipeline] { 00:05:06.928 [Pipeline] sh 00:05:07.212 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:07.212 source /etc/os-release 00:05:07.212 [[ -e /image.version ]] && img=$(< /image.version) 00:05:07.212 # Minimal, systemd-like check. 00:05:07.212 if [[ -e /.dockerenv ]]; then 00:05:07.212 # Clear garbage from the node's name: 00:05:07.212 # agt-er_autotest_547-896 -> autotest_547-896 00:05:07.212 # $HOSTNAME is the actual container id 00:05:07.212 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:07.212 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:07.212 # We can assume this is a mount from a host where container is running, 00:05:07.212 # so fetch its hostname to easily identify the target swarm worker. 
00:05:07.212 container="$(< /etc/hostname) ($agent)" 00:05:07.212 else 00:05:07.212 # Fallback 00:05:07.212 container=$agent 00:05:07.212 fi 00:05:07.212 fi 00:05:07.212 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:07.212 00:05:07.478 [Pipeline] } 00:05:07.488 [Pipeline] // withEnv 00:05:07.493 [Pipeline] setCustomBuildProperty 00:05:07.503 [Pipeline] stage 00:05:07.505 [Pipeline] { (Tests) 00:05:07.517 [Pipeline] sh 00:05:07.790 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:08.060 [Pipeline] sh 00:05:08.339 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:08.610 [Pipeline] timeout 00:05:08.610 Timeout set to expire in 1 hr 30 min 00:05:08.612 [Pipeline] { 00:05:08.627 [Pipeline] sh 00:05:08.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:09.473 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:05:09.485 [Pipeline] sh 00:05:09.772 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:10.046 [Pipeline] sh 00:05:10.324 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:10.596 [Pipeline] sh 00:05:10.875 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:05:11.133 ++ readlink -f spdk_repo 00:05:11.133 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:11.133 + [[ -n /home/vagrant/spdk_repo ]] 00:05:11.133 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:11.133 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:11.133 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:11.133 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:11.133 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:11.133 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:05:11.133 + cd /home/vagrant/spdk_repo 00:05:11.133 + source /etc/os-release 00:05:11.133 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:05:11.133 ++ NAME=Ubuntu 00:05:11.133 ++ VERSION_ID=22.04 00:05:11.133 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:05:11.133 ++ VERSION_CODENAME=jammy 00:05:11.133 ++ ID=ubuntu 00:05:11.133 ++ ID_LIKE=debian 00:05:11.133 ++ HOME_URL=https://www.ubuntu.com/ 00:05:11.133 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:05:11.133 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:05:11.133 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:05:11.133 ++ UBUNTU_CODENAME=jammy 00:05:11.133 + uname -a 00:05:11.133 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:05:11.133 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.133 Hugepages 00:05:11.133 node hugesize free / total 00:05:11.133 node0 1048576kB 0 / 0 00:05:11.133 node0 2048kB 0 / 0 00:05:11.133 00:05:11.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.133 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:11.391 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:11.391 + rm -f /tmp/spdk-ld-path 00:05:11.391 + source autorun-spdk.conf 00:05:11.391 ++ SPDK_TEST_UNITTEST=1 00:05:11.391 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:11.391 ++ SPDK_TEST_NVME=1 00:05:11.391 ++ SPDK_TEST_BLOCKDEV=1 00:05:11.391 ++ SPDK_RUN_ASAN=1 00:05:11.391 ++ SPDK_RUN_UBSAN=1 00:05:11.391 ++ SPDK_TEST_RAID5=1 00:05:11.391 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:11.391 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:11.391 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:11.391 ++ RUN_NIGHTLY=1 00:05:11.391 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:11.391 + [[ -n '' ]] 00:05:11.391 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:11.391 + for M in /var/spdk/build-*-manifest.txt 00:05:11.391 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:11.391 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:11.391 + for M in /var/spdk/build-*-manifest.txt 00:05:11.391 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:11.391 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:11.391 ++ uname 00:05:11.391 + [[ Linux == \L\i\n\u\x ]] 00:05:11.391 + sudo dmesg -T 00:05:11.391 + sudo dmesg --clear 00:05:11.391 + dmesg_pid=2280 00:05:11.391 + [[ Ubuntu == FreeBSD ]] 00:05:11.391 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:11.391 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:11.391 + sudo dmesg -Tw 00:05:11.392 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:11.392 + [[ -x /usr/src/fio-static/fio ]] 00:05:11.392 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:11.392 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:05:11.392 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:11.392 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:05:11.392 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:11.392 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:11.392 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:11.392 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:11.392 Test configuration: 00:05:11.392 SPDK_TEST_UNITTEST=1 00:05:11.392 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:11.392 SPDK_TEST_NVME=1 00:05:11.392 SPDK_TEST_BLOCKDEV=1 00:05:11.392 SPDK_RUN_ASAN=1 00:05:11.392 SPDK_RUN_UBSAN=1 00:05:11.392 SPDK_TEST_RAID5=1 00:05:11.392 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:11.392 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:11.392 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:11.392 RUN_NIGHTLY=1 11:49:16 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:05:11.392 11:49:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.392 11:49:16 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:11.392 11:49:16 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.392 11:49:16 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.392 11:49:16 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:11.392 11:49:16 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:11.392 11:49:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:11.392 11:49:16 -- paths/export.sh@5 -- $ export PATH 00:05:11.392 11:49:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:11.392 11:49:16 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:11.392 11:49:16 -- common/autobuild_common.sh@440 -- $ date +%s 00:05:11.392 11:49:16 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732880956.XXXXXX 00:05:11.392 11:49:16 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732880956.f9x5ts 00:05:11.392 11:49:16 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:05:11.392 11:49:16 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:05:11.392 11:49:16 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:11.392 11:49:16 -- common/autobuild_common.sh@447 -- 
$ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:05:11.392 11:49:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:11.392 11:49:16 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:11.392 11:49:16 -- common/autobuild_common.sh@456 -- $ get_config_params 00:05:11.392 11:49:16 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:05:11.392 11:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.392 11:49:16 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:05:11.392 11:49:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:11.392 11:49:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:11.392 11:49:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:11.392 11:49:16 -- spdk/autobuild.sh@16 -- $ date -u 00:05:11.392 Fri Nov 29 11:49:16 UTC 2024 00:05:11.392 11:49:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:11.392 LTS-67-gc13c99a5e 00:05:11.392 11:49:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:11.392 11:49:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:11.392 11:49:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:05:11.392 11:49:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:11.392 11:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.392 ************************************ 00:05:11.392 START TEST asan 00:05:11.392 ************************************ 00:05:11.392 using asan 00:05:11.392 11:49:16 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:05:11.392 00:05:11.392 real 0m0.001s 00:05:11.392 user 0m0.000s 00:05:11.392 sys 0m0.001s 00:05:11.392 11:49:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:11.392 ************************************ 00:05:11.392 END TEST asan 00:05:11.392 ************************************ 00:05:11.392 11:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.650 11:49:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:11.650 11:49:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:11.650 11:49:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:05:11.650 11:49:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:11.650 11:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.650 ************************************ 00:05:11.650 START TEST ubsan 00:05:11.650 ************************************ 00:05:11.650 using ubsan 00:05:11.650 11:49:16 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:05:11.650 00:05:11.650 real 0m0.000s 00:05:11.650 user 0m0.000s 00:05:11.650 sys 0m0.000s 00:05:11.650 11:49:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:11.650 11:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.650 ************************************ 00:05:11.650 END TEST ubsan 00:05:11.650 ************************************ 00:05:11.650 11:49:16 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:05:11.650 11:49:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:05:11.650 11:49:16 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk 
_build_native_dpdk 00:05:11.650 11:49:16 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:05:11.650 11:49:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:11.650 11:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.650 ************************************ 00:05:11.650 START TEST build_native_dpdk 00:05:11.650 ************************************ 00:05:11.650 11:49:16 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:05:11.650 11:49:16 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:05:11.650 11:49:16 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:05:11.650 11:49:16 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:05:11.650 11:49:16 -- common/autobuild_common.sh@51 -- $ local compiler 00:05:11.650 11:49:16 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:05:11.650 11:49:16 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:05:11.650 11:49:16 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:05:11.650 11:49:16 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:05:11.650 11:49:16 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:05:11.650 11:49:16 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:05:11.650 11:49:16 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:05:11.650 11:49:16 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:05:11.650 11:49:16 -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:05:11.650 11:49:16 -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:05:11.651 11:49:16 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:05:11.651 11:49:16 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:11.651 11:49:16 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:05:11.651 11:49:16 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:05:11.651 11:49:16 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:05:11.651 caf0f5d395 version: 22.11.4 00:05:11.651 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:05:11.651 dc9c799c7d vhost: fix missing spinlock unlock 00:05:11.651 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:05:11.651 6ef77f2a5e net/gve: fix RX buffer size alignment 00:05:11.651 11:49:16 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:05:11.651 11:49:16 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:05:11.651 11:49:16 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:05:11.651 11:49:16 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:05:11.651 11:49:16 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:05:11.651 11:49:16 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:05:11.651 11:49:16 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:05:11.651 11:49:16 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:05:11.651 11:49:16 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:05:11.651 11:49:16 -- common/autobuild_common.sh@168 -- $ uname -s 00:05:11.651 11:49:16 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:05:11.651 11:49:16 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:05:11.651 11:49:16 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:05:11.651 11:49:16 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:05:11.651 11:49:16 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:05:11.651 11:49:16 -- scripts/common.sh@335 -- $ IFS=.-: 00:05:11.651 11:49:16 -- scripts/common.sh@335 -- $ read -ra ver1 00:05:11.651 11:49:16 -- scripts/common.sh@336 -- $ IFS=.-: 00:05:11.651 11:49:16 -- scripts/common.sh@336 -- $ read -ra ver2 00:05:11.651 11:49:16 -- scripts/common.sh@337 -- $ local 'op=<' 00:05:11.651 11:49:16 -- scripts/common.sh@339 -- $ ver1_l=3 00:05:11.651 11:49:16 -- scripts/common.sh@340 -- $ ver2_l=3 00:05:11.651 11:49:16 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:05:11.651 11:49:16 -- scripts/common.sh@343 -- $ case "$op" in 00:05:11.651 11:49:16 -- scripts/common.sh@344 -- $ : 1 00:05:11.651 11:49:16 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:05:11.651 11:49:16 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.651 11:49:16 -- scripts/common.sh@364 -- $ decimal 22 00:05:11.651 11:49:16 -- scripts/common.sh@352 -- $ local d=22 00:05:11.651 11:49:16 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:05:11.651 11:49:16 -- scripts/common.sh@354 -- $ echo 22 00:05:11.651 11:49:16 -- scripts/common.sh@364 -- $ ver1[v]=22 00:05:11.651 11:49:16 -- scripts/common.sh@365 -- $ decimal 21 00:05:11.651 11:49:16 -- scripts/common.sh@352 -- $ local d=21 00:05:11.651 11:49:16 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:05:11.651 11:49:16 -- scripts/common.sh@354 -- $ echo 21 00:05:11.651 11:49:16 -- scripts/common.sh@365 -- $ ver2[v]=21 00:05:11.651 11:49:16 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:05:11.651 11:49:16 -- scripts/common.sh@366 -- $ return 1 00:05:11.651 11:49:16 -- common/autobuild_common.sh@173 -- $ patch -p1 00:05:11.651 patching file config/rte_config.h 00:05:11.651 Hunk #1 succeeded at 60 (offset 1 line). 00:05:11.651 11:49:16 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:05:11.651 11:49:16 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:05:11.651 11:49:16 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:05:11.651 11:49:16 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:05:11.651 11:49:16 -- scripts/common.sh@335 -- $ IFS=.-: 00:05:11.651 11:49:16 -- scripts/common.sh@335 -- $ read -ra ver1 00:05:11.651 11:49:16 -- scripts/common.sh@336 -- $ IFS=.-: 00:05:11.651 11:49:16 -- scripts/common.sh@336 -- $ read -ra ver2 00:05:11.651 11:49:16 -- scripts/common.sh@337 -- $ local 'op=<' 00:05:11.651 11:49:16 -- scripts/common.sh@339 -- $ ver1_l=3 00:05:11.651 11:49:16 -- scripts/common.sh@340 -- $ ver2_l=3 00:05:11.651 11:49:16 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:05:11.651 11:49:16 -- scripts/common.sh@343 -- $ case "$op" in 00:05:11.651 11:49:16 -- scripts/common.sh@344 -- $ : 1 00:05:11.651 11:49:16 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:05:11.651 11:49:16 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.651 11:49:16 -- scripts/common.sh@364 -- $ decimal 22 00:05:11.651 11:49:16 -- scripts/common.sh@352 -- $ local d=22 00:05:11.651 11:49:16 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:05:11.651 11:49:16 -- scripts/common.sh@354 -- $ echo 22 00:05:11.651 11:49:16 -- scripts/common.sh@364 -- $ ver1[v]=22 00:05:11.651 11:49:16 -- scripts/common.sh@365 -- $ decimal 24 00:05:11.651 11:49:16 -- scripts/common.sh@352 -- $ local d=24 00:05:11.651 11:49:16 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:05:11.651 11:49:16 -- scripts/common.sh@354 -- $ echo 24 00:05:11.651 11:49:16 -- scripts/common.sh@365 -- $ ver2[v]=24 00:05:11.651 11:49:16 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:05:11.651 11:49:16 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:05:11.651 11:49:16 -- scripts/common.sh@367 -- $ return 0 00:05:11.651 11:49:16 -- common/autobuild_common.sh@177 -- $ patch -p1 00:05:11.651 patching file lib/pcapng/rte_pcapng.c 00:05:11.651 Hunk #1 succeeded at 110 (offset -18 lines). 
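
Editor's note: _build_native_dpdk gates the rte_config.h and rte_pcapng.c patches above on dotted-version checks (lt 22.11.4 21.11.0, lt 22.11.4 24.07.0) done with the cmp_versions helper whose trace appears above. A compact stand-alone sketch of the same idea, comparing versions field by field; this is an illustration, not the verbatim helper from scripts/common.sh:

#!/usr/bin/env bash
# Sketch: numeric, field-by-field comparison of dotted version strings -- the
# idea behind the cmp_versions/decimal trace above.
set -euo pipefail

version_lt() {            # succeeds (returns 0) when $1 is older than $2
    local -a ver1 ver2
    local v n a b
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1              # equal versions are not "less than"
}

version_lt 22.11.4 21.11.0 && echo "22.11.4 < 21.11.0" || echo "22.11.4 >= 21.11.0"
version_lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0" || echo "22.11.4 >= 24.07.0"
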
00:05:11.651 11:49:16 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:05:11.651 11:49:16 -- common/autobuild_common.sh@181 -- $ uname -s 00:05:11.651 11:49:16 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:05:11.651 11:49:16 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:05:11.651 11:49:16 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:15.837 The Meson build system 00:05:15.837 Version: 1.4.0 00:05:15.837 Source dir: /home/vagrant/spdk_repo/dpdk 00:05:15.837 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:05:15.837 Build type: native build 00:05:15.837 Program cat found: YES (/usr/bin/cat) 00:05:15.837 Project name: DPDK 00:05:15.837 Project version: 22.11.4 00:05:15.837 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:05:15.837 C linker for the host machine: gcc ld.bfd 2.38 00:05:15.837 Host machine cpu family: x86_64 00:05:15.837 Host machine cpu: x86_64 00:05:15.837 Message: ## Building in Developer Mode ## 00:05:15.837 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:15.837 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:05:15.837 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:05:15.837 Program objdump found: YES (/usr/bin/objdump) 00:05:15.837 Program python3 found: YES (/usr/bin/python3) 00:05:15.837 Program cat found: YES (/usr/bin/cat) 00:05:15.837 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:05:15.837 Checking for size of "void *" : 8 00:05:15.837 Checking for size of "void *" : 8 (cached) 00:05:15.837 Library m found: YES 00:05:15.837 Library numa found: YES 00:05:15.837 Has header "numaif.h" : YES 00:05:15.837 Library fdt found: NO 00:05:15.837 Library execinfo found: NO 00:05:15.837 Has header "execinfo.h" : YES 00:05:15.837 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:05:15.837 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:15.837 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:15.837 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:15.837 Run-time dependency openssl found: YES 3.0.2 00:05:15.837 Run-time dependency libpcap found: NO (tried pkgconfig) 00:05:15.837 Library pcap found: NO 00:05:15.837 Compiler for C supports arguments -Wcast-qual: YES 00:05:15.837 Compiler for C supports arguments -Wdeprecated: YES 00:05:15.837 Compiler for C supports arguments -Wformat: YES 00:05:15.837 Compiler for C supports arguments -Wformat-nonliteral: YES 00:05:15.837 Compiler for C supports arguments -Wformat-security: YES 00:05:15.837 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:15.837 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:15.837 Compiler for C supports arguments -Wnested-externs: YES 00:05:15.837 Compiler for C supports arguments -Wold-style-definition: YES 00:05:15.837 Compiler for C supports arguments -Wpointer-arith: YES 00:05:15.837 Compiler for C supports arguments -Wsign-compare: YES 00:05:15.837 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:15.837 Compiler for C supports arguments -Wundef: YES 00:05:15.837 Compiler for C supports arguments -Wwrite-strings: YES 00:05:15.837 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:15.837 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:15.837 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:15.838 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:15.838 Compiler for C supports arguments -mavx512f: YES 00:05:15.838 Checking if "AVX512 checking" compiles: YES 00:05:15.838 Fetching value of define "__SSE4_2__" : 1 00:05:15.838 Fetching value of define "__AES__" : 1 00:05:15.838 Fetching value of define "__AVX__" : 1 00:05:15.838 Fetching value of define "__AVX2__" : 1 00:05:15.838 Fetching value of define "__AVX512BW__" : (undefined) 00:05:15.838 Fetching value of define "__AVX512CD__" : (undefined) 00:05:15.838 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:15.838 Fetching value of define "__AVX512F__" : (undefined) 00:05:15.838 Fetching value of define "__AVX512VL__" : (undefined) 00:05:15.838 Fetching value of define "__PCLMUL__" : 1 00:05:15.838 Fetching value of define "__RDRND__" : 1 00:05:15.838 Fetching value of define "__RDSEED__" : 1 00:05:15.838 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:15.838 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:15.838 Message: lib/kvargs: Defining dependency "kvargs" 00:05:15.838 Message: lib/telemetry: Defining dependency "telemetry" 00:05:15.838 Checking for function "getentropy" : YES 00:05:15.838 Message: lib/eal: Defining dependency "eal" 00:05:15.838 Message: lib/ring: Defining dependency "ring" 00:05:15.838 Message: lib/rcu: Defining dependency "rcu" 00:05:15.838 Message: lib/mempool: Defining dependency "mempool" 00:05:15.838 Message: lib/mbuf: Defining dependency "mbuf" 00:05:15.838 Fetching value of define "__PCLMUL__" : 
1 (cached) 00:05:15.838 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:15.838 Compiler for C supports arguments -mpclmul: YES 00:05:15.838 Compiler for C supports arguments -maes: YES 00:05:15.838 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:15.838 Compiler for C supports arguments -mavx512bw: YES 00:05:15.838 Compiler for C supports arguments -mavx512dq: YES 00:05:15.838 Compiler for C supports arguments -mavx512vl: YES 00:05:15.838 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:15.838 Compiler for C supports arguments -mavx2: YES 00:05:15.838 Compiler for C supports arguments -mavx: YES 00:05:15.838 Message: lib/net: Defining dependency "net" 00:05:15.838 Message: lib/meter: Defining dependency "meter" 00:05:15.838 Message: lib/ethdev: Defining dependency "ethdev" 00:05:15.838 Message: lib/pci: Defining dependency "pci" 00:05:15.838 Message: lib/cmdline: Defining dependency "cmdline" 00:05:15.838 Message: lib/metrics: Defining dependency "metrics" 00:05:15.838 Message: lib/hash: Defining dependency "hash" 00:05:15.838 Message: lib/timer: Defining dependency "timer" 00:05:15.838 Fetching value of define "__AVX2__" : 1 (cached) 00:05:15.838 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:15.838 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:05:15.838 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:05:15.838 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:05:15.838 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:05:15.838 Message: lib/acl: Defining dependency "acl" 00:05:15.838 Message: lib/bbdev: Defining dependency "bbdev" 00:05:15.838 Message: lib/bitratestats: Defining dependency "bitratestats" 00:05:15.838 Run-time dependency libelf found: YES 0.186 00:05:15.838 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:05:15.838 Message: lib/bpf: Defining dependency "bpf" 00:05:15.838 Message: lib/cfgfile: Defining dependency "cfgfile" 00:05:15.838 Message: lib/compressdev: Defining dependency "compressdev" 00:05:15.838 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:15.838 Message: lib/distributor: Defining dependency "distributor" 00:05:15.838 Message: lib/efd: Defining dependency "efd" 00:05:15.838 Message: lib/eventdev: Defining dependency "eventdev" 00:05:15.838 Message: lib/gpudev: Defining dependency "gpudev" 00:05:15.838 Message: lib/gro: Defining dependency "gro" 00:05:15.838 Message: lib/gso: Defining dependency "gso" 00:05:15.838 Message: lib/ip_frag: Defining dependency "ip_frag" 00:05:15.838 Message: lib/jobstats: Defining dependency "jobstats" 00:05:15.838 Message: lib/latencystats: Defining dependency "latencystats" 00:05:15.838 Message: lib/lpm: Defining dependency "lpm" 00:05:15.838 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:15.838 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:05:15.838 Fetching value of define "__AVX512IFMA__" : (undefined) 00:05:15.838 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:05:15.838 Message: lib/member: Defining dependency "member" 00:05:15.838 Message: lib/pcapng: Defining dependency "pcapng" 00:05:15.838 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:15.838 Message: lib/power: Defining dependency "power" 00:05:15.838 Message: lib/rawdev: Defining dependency "rawdev" 00:05:15.838 Message: lib/regexdev: Defining dependency "regexdev" 00:05:15.838 
Message: lib/dmadev: Defining dependency "dmadev" 00:05:15.838 Message: lib/rib: Defining dependency "rib" 00:05:15.838 Message: lib/reorder: Defining dependency "reorder" 00:05:15.838 Message: lib/sched: Defining dependency "sched" 00:05:15.838 Message: lib/security: Defining dependency "security" 00:05:15.838 Message: lib/stack: Defining dependency "stack" 00:05:15.838 Has header "linux/userfaultfd.h" : YES 00:05:15.838 Message: lib/vhost: Defining dependency "vhost" 00:05:15.838 Message: lib/ipsec: Defining dependency "ipsec" 00:05:15.838 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:15.838 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:05:15.838 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:05:15.838 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:15.838 Message: lib/fib: Defining dependency "fib" 00:05:15.838 Message: lib/port: Defining dependency "port" 00:05:15.838 Message: lib/pdump: Defining dependency "pdump" 00:05:15.838 Message: lib/table: Defining dependency "table" 00:05:15.838 Message: lib/pipeline: Defining dependency "pipeline" 00:05:15.838 Message: lib/graph: Defining dependency "graph" 00:05:15.838 Message: lib/node: Defining dependency "node" 00:05:15.838 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:15.838 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:15.838 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:15.838 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:15.838 Compiler for C supports arguments -Wno-sign-compare: YES 00:05:15.838 Compiler for C supports arguments -Wno-unused-value: YES 00:05:15.838 Compiler for C supports arguments -Wno-format: YES 00:05:15.838 Compiler for C supports arguments -Wno-format-security: YES 00:05:17.740 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:05:17.740 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:17.740 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:05:17.740 Compiler for C supports arguments -Wno-unused-parameter: YES 00:05:17.740 Fetching value of define "__AVX2__" : 1 (cached) 00:05:17.740 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:17.740 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:17.740 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:17.740 Compiler for C supports arguments -march=skylake-avx512: YES 00:05:17.740 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:05:17.740 Program doxygen found: YES (/usr/bin/doxygen) 00:05:17.740 Configuring doxy-api.conf using configuration 00:05:17.740 Program sphinx-build found: NO 00:05:17.740 Configuring rte_build_config.h using configuration 00:05:17.740 Message: 00:05:17.740 ================= 00:05:17.740 Applications Enabled 00:05:17.740 ================= 00:05:17.740 00:05:17.740 apps: 00:05:17.740 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:05:17.740 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:05:17.740 00:05:17.740 00:05:17.740 Message: 00:05:17.740 ================= 00:05:17.740 Libraries Enabled 00:05:17.740 ================= 00:05:17.740 00:05:17.740 libs: 00:05:17.740 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:05:17.740 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:05:17.740 bbdev, bitratestats, bpf, cfgfile, compressdev, 
cryptodev, distributor, efd, 00:05:17.740 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:05:17.740 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:05:17.740 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:05:17.740 table, pipeline, graph, node, 00:05:17.740 00:05:17.740 Message: 00:05:17.740 =============== 00:05:17.740 Drivers Enabled 00:05:17.740 =============== 00:05:17.740 00:05:17.740 common: 00:05:17.740 00:05:17.740 bus: 00:05:17.740 pci, vdev, 00:05:17.740 mempool: 00:05:17.740 ring, 00:05:17.740 dma: 00:05:17.740 00:05:17.740 net: 00:05:17.740 i40e, 00:05:17.740 raw: 00:05:17.740 00:05:17.740 crypto: 00:05:17.740 00:05:17.740 compress: 00:05:17.740 00:05:17.740 regex: 00:05:17.740 00:05:17.740 vdpa: 00:05:17.740 00:05:17.740 event: 00:05:17.740 00:05:17.740 baseband: 00:05:17.740 00:05:17.740 gpu: 00:05:17.740 00:05:17.740 00:05:17.740 Message: 00:05:17.740 ================= 00:05:17.740 Content Skipped 00:05:17.740 ================= 00:05:17.740 00:05:17.740 apps: 00:05:17.740 dumpcap: missing dependency, "libpcap" 00:05:17.741 00:05:17.741 libs: 00:05:17.741 kni: explicitly disabled via build config (deprecated lib) 00:05:17.741 flow_classify: explicitly disabled via build config (deprecated lib) 00:05:17.741 00:05:17.741 drivers: 00:05:17.741 common/cpt: not in enabled drivers build config 00:05:17.741 common/dpaax: not in enabled drivers build config 00:05:17.741 common/iavf: not in enabled drivers build config 00:05:17.741 common/idpf: not in enabled drivers build config 00:05:17.741 common/mvep: not in enabled drivers build config 00:05:17.741 common/octeontx: not in enabled drivers build config 00:05:17.741 bus/auxiliary: not in enabled drivers build config 00:05:17.741 bus/dpaa: not in enabled drivers build config 00:05:17.741 bus/fslmc: not in enabled drivers build config 00:05:17.741 bus/ifpga: not in enabled drivers build config 00:05:17.741 bus/vmbus: not in enabled drivers build config 00:05:17.741 common/cnxk: not in enabled drivers build config 00:05:17.741 common/mlx5: not in enabled drivers build config 00:05:17.741 common/qat: not in enabled drivers build config 00:05:17.741 common/sfc_efx: not in enabled drivers build config 00:05:17.741 mempool/bucket: not in enabled drivers build config 00:05:17.741 mempool/cnxk: not in enabled drivers build config 00:05:17.741 mempool/dpaa: not in enabled drivers build config 00:05:17.741 mempool/dpaa2: not in enabled drivers build config 00:05:17.741 mempool/octeontx: not in enabled drivers build config 00:05:17.741 mempool/stack: not in enabled drivers build config 00:05:17.741 dma/cnxk: not in enabled drivers build config 00:05:17.741 dma/dpaa: not in enabled drivers build config 00:05:17.741 dma/dpaa2: not in enabled drivers build config 00:05:17.741 dma/hisilicon: not in enabled drivers build config 00:05:17.741 dma/idxd: not in enabled drivers build config 00:05:17.741 dma/ioat: not in enabled drivers build config 00:05:17.741 dma/skeleton: not in enabled drivers build config 00:05:17.741 net/af_packet: not in enabled drivers build config 00:05:17.741 net/af_xdp: not in enabled drivers build config 00:05:17.741 net/ark: not in enabled drivers build config 00:05:17.741 net/atlantic: not in enabled drivers build config 00:05:17.741 net/avp: not in enabled drivers build config 00:05:17.741 net/axgbe: not in enabled drivers build config 00:05:17.741 net/bnx2x: not in enabled drivers build config 00:05:17.741 net/bnxt: not in enabled drivers build config 00:05:17.741 
net/bonding: not in enabled drivers build config 00:05:17.741 net/cnxk: not in enabled drivers build config 00:05:17.741 net/cxgbe: not in enabled drivers build config 00:05:17.741 net/dpaa: not in enabled drivers build config 00:05:17.741 net/dpaa2: not in enabled drivers build config 00:05:17.741 net/e1000: not in enabled drivers build config 00:05:17.741 net/ena: not in enabled drivers build config 00:05:17.741 net/enetc: not in enabled drivers build config 00:05:17.741 net/enetfec: not in enabled drivers build config 00:05:17.741 net/enic: not in enabled drivers build config 00:05:17.741 net/failsafe: not in enabled drivers build config 00:05:17.741 net/fm10k: not in enabled drivers build config 00:05:17.741 net/gve: not in enabled drivers build config 00:05:17.741 net/hinic: not in enabled drivers build config 00:05:17.741 net/hns3: not in enabled drivers build config 00:05:17.741 net/iavf: not in enabled drivers build config 00:05:17.741 net/ice: not in enabled drivers build config 00:05:17.741 net/idpf: not in enabled drivers build config 00:05:17.741 net/igc: not in enabled drivers build config 00:05:17.741 net/ionic: not in enabled drivers build config 00:05:17.741 net/ipn3ke: not in enabled drivers build config 00:05:17.741 net/ixgbe: not in enabled drivers build config 00:05:17.741 net/kni: not in enabled drivers build config 00:05:17.741 net/liquidio: not in enabled drivers build config 00:05:17.741 net/mana: not in enabled drivers build config 00:05:17.741 net/memif: not in enabled drivers build config 00:05:17.741 net/mlx4: not in enabled drivers build config 00:05:17.741 net/mlx5: not in enabled drivers build config 00:05:17.741 net/mvneta: not in enabled drivers build config 00:05:17.741 net/mvpp2: not in enabled drivers build config 00:05:17.741 net/netvsc: not in enabled drivers build config 00:05:17.741 net/nfb: not in enabled drivers build config 00:05:17.741 net/nfp: not in enabled drivers build config 00:05:17.741 net/ngbe: not in enabled drivers build config 00:05:17.741 net/null: not in enabled drivers build config 00:05:17.741 net/octeontx: not in enabled drivers build config 00:05:17.741 net/octeon_ep: not in enabled drivers build config 00:05:17.741 net/pcap: not in enabled drivers build config 00:05:17.741 net/pfe: not in enabled drivers build config 00:05:17.741 net/qede: not in enabled drivers build config 00:05:17.741 net/ring: not in enabled drivers build config 00:05:17.741 net/sfc: not in enabled drivers build config 00:05:17.741 net/softnic: not in enabled drivers build config 00:05:17.741 net/tap: not in enabled drivers build config 00:05:17.741 net/thunderx: not in enabled drivers build config 00:05:17.741 net/txgbe: not in enabled drivers build config 00:05:17.741 net/vdev_netvsc: not in enabled drivers build config 00:05:17.741 net/vhost: not in enabled drivers build config 00:05:17.741 net/virtio: not in enabled drivers build config 00:05:17.741 net/vmxnet3: not in enabled drivers build config 00:05:17.741 raw/cnxk_bphy: not in enabled drivers build config 00:05:17.741 raw/cnxk_gpio: not in enabled drivers build config 00:05:17.741 raw/dpaa2_cmdif: not in enabled drivers build config 00:05:17.741 raw/ifpga: not in enabled drivers build config 00:05:17.741 raw/ntb: not in enabled drivers build config 00:05:17.741 raw/skeleton: not in enabled drivers build config 00:05:17.741 crypto/armv8: not in enabled drivers build config 00:05:17.741 crypto/bcmfs: not in enabled drivers build config 00:05:17.741 crypto/caam_jr: not in enabled drivers build config 
00:05:17.741 crypto/ccp: not in enabled drivers build config 00:05:17.741 crypto/cnxk: not in enabled drivers build config 00:05:17.741 crypto/dpaa_sec: not in enabled drivers build config 00:05:17.741 crypto/dpaa2_sec: not in enabled drivers build config 00:05:17.741 crypto/ipsec_mb: not in enabled drivers build config 00:05:17.741 crypto/mlx5: not in enabled drivers build config 00:05:17.741 crypto/mvsam: not in enabled drivers build config 00:05:17.741 crypto/nitrox: not in enabled drivers build config 00:05:17.741 crypto/null: not in enabled drivers build config 00:05:17.741 crypto/octeontx: not in enabled drivers build config 00:05:17.741 crypto/openssl: not in enabled drivers build config 00:05:17.741 crypto/scheduler: not in enabled drivers build config 00:05:17.741 crypto/uadk: not in enabled drivers build config 00:05:17.741 crypto/virtio: not in enabled drivers build config 00:05:17.741 compress/isal: not in enabled drivers build config 00:05:17.741 compress/mlx5: not in enabled drivers build config 00:05:17.741 compress/octeontx: not in enabled drivers build config 00:05:17.741 compress/zlib: not in enabled drivers build config 00:05:17.741 regex/mlx5: not in enabled drivers build config 00:05:17.741 regex/cn9k: not in enabled drivers build config 00:05:17.741 vdpa/ifc: not in enabled drivers build config 00:05:17.741 vdpa/mlx5: not in enabled drivers build config 00:05:17.741 vdpa/sfc: not in enabled drivers build config 00:05:17.741 event/cnxk: not in enabled drivers build config 00:05:17.741 event/dlb2: not in enabled drivers build config 00:05:17.741 event/dpaa: not in enabled drivers build config 00:05:17.741 event/dpaa2: not in enabled drivers build config 00:05:17.741 event/dsw: not in enabled drivers build config 00:05:17.741 event/opdl: not in enabled drivers build config 00:05:17.741 event/skeleton: not in enabled drivers build config 00:05:17.741 event/sw: not in enabled drivers build config 00:05:17.741 event/octeontx: not in enabled drivers build config 00:05:17.741 baseband/acc: not in enabled drivers build config 00:05:17.741 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:05:17.741 baseband/fpga_lte_fec: not in enabled drivers build config 00:05:17.741 baseband/la12xx: not in enabled drivers build config 00:05:17.741 baseband/null: not in enabled drivers build config 00:05:17.741 baseband/turbo_sw: not in enabled drivers build config 00:05:17.741 gpu/cuda: not in enabled drivers build config 00:05:17.741 00:05:17.741 00:05:17.741 Build targets in project: 313 00:05:17.741 00:05:17.741 DPDK 22.11.4 00:05:17.741 00:05:17.741 User defined options 00:05:17.741 libdir : lib 00:05:17.741 prefix : /home/vagrant/spdk_repo/dpdk/build 00:05:17.741 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:05:17.741 c_link_args : 00:05:17.741 enable_docs : false 00:05:17.741 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:17.741 enable_kmods : false 00:05:17.741 machine : native 00:05:17.741 tests : false 00:05:17.741 00:05:17.741 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:17.741 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
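For reference, the "User defined options" summary printed above corresponds to a DPDK 22.11 meson configuration roughly like the one sketched below. This is a reconstruction from the options listed in the log, not the literal command issued by SPDK's autobuild_common.sh; per the warning above, that wrapper invokes meson without the explicit `setup` subcommand, and the paths and build directory (build-tmp under /home/vagrant/spdk_repo/dpdk) are taken from the surrounding log lines.

    # hypothetical reconstruction, run from /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10

The enable_drivers list explains the short "Drivers Enabled" section (bus: pci, vdev; mempool: ring; net: i40e) and why every other driver is reported as "not in enabled drivers build config"; likewise, dumpcap is skipped only because libpcap headers were not found on the build host.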
00:05:17.741 11:49:22 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:05:17.741 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:17.741 [1/740] Generating lib/rte_kvargs_def with a custom command 00:05:17.741 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:05:17.741 [3/740] Generating lib/rte_telemetry_def with a custom command 00:05:17.741 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:05:17.741 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:17.741 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:17.741 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:17.741 [8/740] Linking static target lib/librte_kvargs.a 00:05:17.741 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:17.741 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:17.741 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:17.741 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:17.741 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:17.741 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:18.066 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:18.066 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:18.066 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:18.066 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:18.066 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:18.066 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:18.066 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:05:18.066 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:18.066 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:18.066 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:18.066 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:18.066 [26/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.066 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:18.066 [28/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:18.324 [29/740] Linking target lib/librte_kvargs.so.23.0 00:05:18.324 [30/740] Linking static target lib/librte_telemetry.a 00:05:18.324 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:18.324 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:18.324 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:18.324 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:18.324 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:18.324 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:18.324 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:18.324 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:18.324 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:18.324 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:18.582 [41/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:05:18.582 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:18.582 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.582 [44/740] Linking target lib/librte_telemetry.so.23.0 00:05:18.582 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:18.582 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:18.582 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:18.582 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:18.582 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:18.840 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:05:18.840 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:18.840 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:18.840 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:18.840 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:18.840 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:18.840 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:18.840 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:18.840 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:18.840 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:18.840 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:18.840 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:18.840 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:18.840 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:18.840 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:19.097 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:19.097 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:05:19.097 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:19.097 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:19.097 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:19.097 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:19.097 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:19.097 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:19.097 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:19.097 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:19.097 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:19.097 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:19.097 [77/740] Generating lib/rte_eal_def with a custom command 00:05:19.097 [78/740] Generating lib/rte_eal_mingw with a 
custom command 00:05:19.097 [79/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:19.097 [80/740] Generating lib/rte_ring_def with a custom command 00:05:19.097 [81/740] Generating lib/rte_ring_mingw with a custom command 00:05:19.097 [82/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:19.355 [83/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:19.355 [84/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:19.355 [85/740] Generating lib/rte_rcu_def with a custom command 00:05:19.355 [86/740] Generating lib/rte_rcu_mingw with a custom command 00:05:19.355 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:19.355 [88/740] Linking static target lib/librte_ring.a 00:05:19.355 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:19.355 [90/740] Generating lib/rte_mempool_def with a custom command 00:05:19.355 [91/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:19.612 [92/740] Generating lib/rte_mempool_mingw with a custom command 00:05:19.612 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:19.612 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.612 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:19.612 [96/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:19.612 [97/740] Linking static target lib/librte_eal.a 00:05:19.612 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:19.612 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:19.612 [100/740] Generating lib/rte_mbuf_def with a custom command 00:05:19.869 [101/740] Generating lib/rte_mbuf_mingw with a custom command 00:05:19.869 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:19.869 [103/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:19.869 [104/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:19.869 [105/740] Linking static target lib/librte_rcu.a 00:05:20.127 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:20.127 [107/740] Linking static target lib/librte_mempool.a 00:05:20.127 [108/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:20.127 [109/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:20.127 [110/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:20.127 [111/740] Generating lib/rte_net_def with a custom command 00:05:20.127 [112/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:20.127 [113/740] Generating lib/rte_net_mingw with a custom command 00:05:20.384 [114/740] Generating lib/rte_meter_def with a custom command 00:05:20.384 [115/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.384 [116/740] Generating lib/rte_meter_mingw with a custom command 00:05:20.384 [117/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:20.384 [118/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:20.384 [119/740] Linking static target lib/librte_meter.a 00:05:20.384 [120/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:20.384 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:20.642 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:20.642 [123/740] Linking static target lib/librte_net.a 00:05:20.642 [124/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:20.642 [125/740] Linking static target lib/librte_mbuf.a 00:05:20.642 [126/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.899 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:20.899 [128/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.899 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:20.899 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:20.899 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:21.157 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:21.157 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.157 [134/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.157 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:21.415 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:21.415 [137/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:21.415 [138/740] Generating lib/rte_ethdev_def with a custom command 00:05:21.415 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:05:21.674 [140/740] Generating lib/rte_pci_def with a custom command 00:05:21.674 [141/740] Generating lib/rte_pci_mingw with a custom command 00:05:21.674 [142/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:21.674 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:21.674 [144/740] Linking static target lib/librte_pci.a 00:05:21.674 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:21.674 [146/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:21.674 [147/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:21.674 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:21.674 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:21.674 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.932 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:21.932 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:21.932 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:21.932 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:21.932 [155/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:21.932 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:21.932 [157/740] Generating lib/rte_cmdline_def with a custom command 00:05:21.932 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:21.932 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:21.932 [160/740] Generating lib/rte_cmdline_mingw with a custom command 00:05:21.932 [161/740] Generating lib/rte_metrics_def with a custom command 00:05:21.932 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:05:22.191 [163/740] Compiling C 
object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:05:22.191 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:22.191 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:22.191 [166/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:22.191 [167/740] Linking static target lib/librte_cmdline.a 00:05:22.191 [168/740] Generating lib/rte_hash_def with a custom command 00:05:22.191 [169/740] Generating lib/rte_hash_mingw with a custom command 00:05:22.191 [170/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:22.191 [171/740] Generating lib/rte_timer_def with a custom command 00:05:22.191 [172/740] Generating lib/rte_timer_mingw with a custom command 00:05:22.191 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:22.453 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:05:22.453 [175/740] Linking static target lib/librte_metrics.a 00:05:22.453 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:22.453 [177/740] Linking static target lib/librte_timer.a 00:05:22.711 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.970 [179/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.970 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:05:22.970 [181/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:23.227 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:23.228 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.228 [184/740] Linking static target lib/librte_ethdev.a 00:05:23.486 [185/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:05:23.486 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:05:23.486 [187/740] Generating lib/rte_acl_def with a custom command 00:05:23.486 [188/740] Generating lib/rte_acl_mingw with a custom command 00:05:23.486 [189/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:05:23.486 [190/740] Generating lib/rte_bbdev_def with a custom command 00:05:23.486 [191/740] Generating lib/rte_bbdev_mingw with a custom command 00:05:23.486 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:05:23.486 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:05:23.486 [194/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:05:24.053 [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:05:24.053 [196/740] Linking static target lib/librte_bitratestats.a 00:05:24.053 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:05:24.053 [198/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:05:24.053 [199/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.311 [200/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:05:24.311 [201/740] Linking static target lib/librte_bbdev.a 00:05:24.569 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:05:24.569 [203/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:24.569 [204/740] Linking static target lib/librte_hash.a 00:05:24.569 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:05:24.827 [206/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:05:24.827 [207/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:05:24.827 [208/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.827 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:05:24.827 [210/740] Generating lib/rte_bpf_def with a custom command 00:05:25.084 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:05:25.084 [212/740] Generating lib/rte_cfgfile_def with a custom command 00:05:25.084 [213/740] Generating lib/rte_cfgfile_mingw with a custom command 00:05:25.084 [214/740] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:05:25.084 [215/740] Linking static target lib/acl/libavx512_tmp.a 00:05:25.084 [216/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:25.340 [217/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:05:25.340 [218/740] Linking static target lib/librte_cfgfile.a 00:05:25.340 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:05:25.598 [220/740] Generating lib/rte_compressdev_def with a custom command 00:05:25.598 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:05:25.598 [222/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:05:25.598 [223/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:25.598 [224/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:05:25.598 [225/740] Generating lib/rte_cryptodev_def with a custom command 00:05:25.598 [226/740] Generating lib/rte_cryptodev_mingw with a custom command 00:05:25.598 [227/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:25.856 [228/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:25.856 [229/740] Linking static target lib/librte_compressdev.a 00:05:25.856 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:05:26.115 [231/740] Linking static target lib/librte_acl.a 00:05:26.115 [232/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:05:26.115 [233/740] Linking static target lib/librte_bpf.a 00:05:26.115 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:26.115 [235/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:26.374 [236/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:05:26.374 [237/740] Generating lib/rte_distributor_def with a custom command 00:05:26.374 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.374 [239/740] Generating lib/rte_distributor_mingw with a custom command 00:05:26.374 [240/740] Generating lib/rte_efd_def with a custom command 00:05:26.374 [241/740] Generating lib/rte_efd_mingw with a custom command 00:05:26.374 [242/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.374 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:05:26.632 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:05:26.632 [245/740] Linking static target lib/librte_distributor.a 00:05:26.632 [246/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.632 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:05:26.890 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.890 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:05:27.148 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:05:27.148 [251/740] Generating lib/rte_eventdev_def with a custom command 00:05:27.148 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:05:27.406 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:05:27.406 [254/740] Linking static target lib/librte_efd.a 00:05:27.666 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:05:27.666 [256/740] Generating lib/rte_gpudev_def with a custom command 00:05:27.666 [257/740] Generating lib/rte_gpudev_mingw with a custom command 00:05:27.666 [258/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:27.666 [259/740] Linking static target lib/librte_cryptodev.a 00:05:27.666 [260/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.975 [261/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.975 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:05:27.975 [263/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:05:27.975 [264/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:05:27.975 [265/740] Linking target lib/librte_eal.so.23.0 00:05:27.975 [266/740] Linking static target lib/librte_gpudev.a 00:05:27.975 [267/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:05:27.975 [268/740] Linking target lib/librte_ring.so.23.0 00:05:28.234 [269/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:05:28.234 [270/740] Linking target lib/librte_meter.so.23.0 00:05:28.234 [271/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:05:28.234 [272/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:05:28.234 [273/740] Linking target lib/librte_rcu.so.23.0 00:05:28.234 [274/740] Linking target lib/librte_mempool.so.23.0 00:05:28.234 [275/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:05:28.234 [276/740] Linking target lib/librte_pci.so.23.0 00:05:28.493 [277/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:05:28.493 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:05:28.493 [279/740] Linking target lib/librte_timer.so.23.0 00:05:28.493 [280/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:05:28.493 [281/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.493 [282/740] Linking target lib/librte_acl.so.23.0 00:05:28.493 [283/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:05:28.493 [284/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:05:28.493 [285/740] Linking target lib/librte_mbuf.so.23.0 00:05:28.494 [286/740] Linking target lib/librte_cfgfile.so.23.0 00:05:28.494 [287/740] Generating lib/rte_gro_def with a custom command 00:05:28.494 [288/740] Generating lib/rte_gro_mingw with a custom command 00:05:28.494 [289/740] Generating symbol file 
lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:05:28.494 [290/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:05:28.494 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:05:28.494 [292/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:05:28.494 [293/740] Linking target lib/librte_net.so.23.0 00:05:28.752 [294/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:05:28.753 [295/740] Linking target lib/librte_ethdev.so.23.0 00:05:28.753 [296/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.011 [297/740] Linking target lib/librte_cmdline.so.23.0 00:05:29.011 [298/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:05:29.011 [299/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:05:29.011 [300/740] Linking target lib/librte_hash.so.23.0 00:05:29.011 [301/740] Linking target lib/librte_metrics.so.23.0 00:05:29.011 [302/740] Linking target lib/librte_bbdev.so.23.0 00:05:29.011 [303/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:05:29.011 [304/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:05:29.011 [305/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:05:29.011 [306/740] Linking target lib/librte_bpf.so.23.0 00:05:29.011 [307/740] Linking target lib/librte_compressdev.so.23.0 00:05:29.011 [308/740] Linking target lib/librte_distributor.so.23.0 00:05:29.011 [309/740] Linking target lib/librte_gpudev.so.23.0 00:05:29.011 [310/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:05:29.011 [311/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:05:29.011 [312/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:05:29.011 [313/740] Linking static target lib/librte_eventdev.a 00:05:29.011 [314/740] Generating lib/rte_gso_mingw with a custom command 00:05:29.011 [315/740] Generating lib/rte_gso_def with a custom command 00:05:29.011 [316/740] Linking target lib/librte_efd.so.23.0 00:05:29.011 [317/740] Linking target lib/librte_bitratestats.so.23.0 00:05:29.011 [318/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:05:29.270 [319/740] Linking static target lib/librte_gro.a 00:05:29.270 [320/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:05:29.270 [321/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:05:29.270 [322/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.270 [323/740] Linking target lib/librte_gro.so.23.0 00:05:29.527 [324/740] Generating lib/rte_ip_frag_def with a custom command 00:05:29.527 [325/740] Generating lib/rte_ip_frag_mingw with a custom command 00:05:29.527 [326/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:05:29.527 [327/740] Linking static target lib/librte_gso.a 00:05:29.527 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:05:29.527 [329/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:05:29.527 [330/740] Linking static target lib/librte_jobstats.a 00:05:29.527 [331/740] Generating lib/rte_jobstats_def with a custom command 00:05:29.527 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:05:29.785 [333/740] Generating lib/gso.sym_chk with a custom command (wrapped 
by meson to capture output) 00:05:29.785 [334/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:05:29.785 [335/740] Linking target lib/librte_gso.so.23.0 00:05:29.785 [336/740] Generating lib/rte_latencystats_def with a custom command 00:05:29.785 [337/740] Generating lib/rte_latencystats_mingw with a custom command 00:05:29.785 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:05:29.785 [339/740] Generating lib/rte_lpm_def with a custom command 00:05:29.785 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:05:29.785 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:05:29.785 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:05:29.785 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:05:29.785 [344/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.785 [345/740] Linking static target lib/librte_ip_frag.a 00:05:30.043 [346/740] Linking target lib/librte_jobstats.so.23.0 00:05:30.301 [347/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.301 [348/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:05:30.301 [349/740] Linking target lib/librte_ip_frag.so.23.0 00:05:30.301 [350/740] Linking static target lib/librte_latencystats.a 00:05:30.301 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:05:30.301 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:05:30.301 [353/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:05:30.301 [354/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.301 [355/740] Generating lib/rte_member_def with a custom command 00:05:30.301 [356/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:05:30.301 [357/740] Generating lib/rte_member_mingw with a custom command 00:05:30.301 [358/740] Linking target lib/librte_cryptodev.so.23.0 00:05:30.301 [359/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:05:30.301 [360/740] Generating lib/rte_pcapng_def with a custom command 00:05:30.301 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:05:30.560 [362/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.560 [363/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:05:30.560 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:30.560 [365/740] Linking target lib/librte_latencystats.so.23.0 00:05:30.560 [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:30.560 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:30.819 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:30.819 [369/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:05:30.819 [370/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:05:30.819 [371/740] Linking static target lib/librte_lpm.a 00:05:30.819 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:05:31.078 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:05:31.078 [374/740] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:31.078 [375/740] Generating lib/rte_power_def with a custom command 00:05:31.078 [376/740] Generating lib/rte_power_mingw with a custom command 00:05:31.078 [377/740] Generating lib/rte_rawdev_def with a custom command 00:05:31.078 [378/740] Generating lib/rte_rawdev_mingw with a custom command 00:05:31.078 [379/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:31.078 [380/740] Generating lib/rte_regexdev_def with a custom command 00:05:31.078 [381/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:05:31.078 [382/740] Generating lib/rte_regexdev_mingw with a custom command 00:05:31.078 [383/740] Linking static target lib/librte_pcapng.a 00:05:31.078 [384/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.337 [385/740] Linking target lib/librte_lpm.so.23.0 00:05:31.337 [386/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:31.337 [387/740] Generating lib/rte_dmadev_def with a custom command 00:05:31.337 [388/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:05:31.337 [389/740] Generating lib/rte_dmadev_mingw with a custom command 00:05:31.337 [390/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:05:31.337 [391/740] Generating lib/rte_rib_def with a custom command 00:05:31.337 [392/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:05:31.337 [393/740] Linking static target lib/librte_rawdev.a 00:05:31.337 [394/740] Generating lib/rte_rib_mingw with a custom command 00:05:31.337 [395/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.337 [396/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.596 [397/740] Linking target lib/librte_eventdev.so.23.0 00:05:31.596 [398/740] Linking target lib/librte_pcapng.so.23.0 00:05:31.596 [399/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:31.596 [400/740] Linking static target lib/librte_power.a 00:05:31.596 [401/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:31.596 [402/740] Linking static target lib/librte_dmadev.a 00:05:31.596 [403/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:05:31.596 [404/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:05:31.596 [405/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:05:31.596 [406/740] Linking static target lib/librte_regexdev.a 00:05:31.596 [407/740] Generating lib/rte_reorder_def with a custom command 00:05:31.596 [408/740] Generating lib/rte_reorder_mingw with a custom command 00:05:31.596 [409/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:05:31.596 [410/740] Linking static target lib/librte_member.a 00:05:31.854 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:05:31.855 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:05:31.855 [413/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.855 [414/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:05:31.855 [415/740] Linking target lib/librte_rawdev.so.23.0 00:05:31.855 [416/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:05:32.112 [417/740] Generating 
lib/rte_sched_mingw with a custom command 00:05:32.112 [418/740] Generating lib/rte_sched_def with a custom command 00:05:32.112 [419/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.112 [420/740] Generating lib/rte_security_def with a custom command 00:05:32.112 [421/740] Generating lib/rte_security_mingw with a custom command 00:05:32.112 [422/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:32.112 [423/740] Linking static target lib/librte_reorder.a 00:05:32.112 [424/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.112 [425/740] Linking target lib/librte_member.so.23.0 00:05:32.112 [426/740] Linking target lib/librte_dmadev.so.23.0 00:05:32.112 [427/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:05:32.112 [428/740] Generating lib/rte_stack_def with a custom command 00:05:32.112 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:05:32.112 [430/740] Generating lib/rte_stack_mingw with a custom command 00:05:32.113 [431/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:05:32.113 [432/740] Linking static target lib/librte_stack.a 00:05:32.113 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:05:32.370 [434/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.370 [435/740] Linking target lib/librte_reorder.so.23.0 00:05:32.370 [436/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.370 [437/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:05:32.370 [438/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:32.370 [439/740] Linking static target lib/librte_rib.a 00:05:32.370 [440/740] Linking target lib/librte_regexdev.so.23.0 00:05:32.370 [441/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.370 [442/740] Linking target lib/librte_stack.so.23.0 00:05:32.629 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.629 [444/740] Linking target lib/librte_power.so.23.0 00:05:32.629 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:32.629 [446/740] Linking static target lib/librte_security.a 00:05:32.888 [447/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:32.888 [448/740] Generating lib/rte_vhost_def with a custom command 00:05:32.888 [449/740] Generating lib/rte_vhost_mingw with a custom command 00:05:32.888 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:32.888 [451/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.888 [452/740] Linking target lib/librte_rib.so.23.0 00:05:33.147 [453/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:05:33.147 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:33.147 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.147 [456/740] Linking target lib/librte_security.so.23.0 00:05:33.406 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:05:33.406 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:05:33.406 [459/740] Linking static target lib/librte_sched.a 00:05:33.665 [460/740] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:05:33.665 [461/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:33.665 [462/740] Generating lib/rte_ipsec_def with a custom command 00:05:33.665 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:05:33.665 [464/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:05:33.665 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:33.923 [466/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.923 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:05:33.923 [468/740] Linking target lib/librte_sched.so.23.0 00:05:33.923 [469/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:05:34.182 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:05:34.182 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:05:34.182 [472/740] Generating lib/rte_fib_def with a custom command 00:05:34.182 [473/740] Generating lib/rte_fib_mingw with a custom command 00:05:34.182 [474/740] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:05:34.182 [475/740] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:05:34.182 [476/740] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:05:34.182 [477/740] Linking static target lib/fib/libtrie_avx512_tmp.a 00:05:34.440 [478/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:05:34.440 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:05:34.697 [480/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:05:34.697 [481/740] Linking static target lib/librte_ipsec.a 00:05:34.956 [482/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:05:34.956 [483/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.956 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:05:34.956 [485/740] Linking target lib/librte_ipsec.so.23.0 00:05:34.956 [486/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:05:34.956 [487/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:05:34.956 [488/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:05:34.956 [489/740] Linking static target lib/librte_fib.a 00:05:35.214 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:05:35.472 [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.472 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:05:35.472 [493/740] Linking target lib/librte_fib.so.23.0 00:05:35.731 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:05:35.731 [495/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:05:35.731 [496/740] Generating lib/rte_port_def with a custom command 00:05:35.731 [497/740] Generating lib/rte_port_mingw with a custom command 00:05:35.731 [498/740] Generating lib/rte_pdump_def with a custom command 00:05:35.731 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:05:35.731 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:05:35.731 [501/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:05:35.731 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:05:35.989 [503/740] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:05:35.989 [504/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:05:35.989 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:05:35.989 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:05:36.247 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:05:36.247 [508/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:05:36.247 [509/740] Linking static target lib/librte_port.a 00:05:36.505 [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:05:36.505 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:05:36.505 [512/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:05:36.505 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:05:36.505 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:05:36.764 [515/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:05:36.764 [516/740] Linking static target lib/librte_pdump.a 00:05:37.023 [517/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.023 [518/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.023 [519/740] Linking target lib/librte_port.so.23.0 00:05:37.023 [520/740] Linking target lib/librte_pdump.so.23.0 00:05:37.023 [521/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:05:37.023 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:05:37.023 [523/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:05:37.282 [524/740] Generating lib/rte_table_def with a custom command 00:05:37.282 [525/740] Generating lib/rte_table_mingw with a custom command 00:05:37.282 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:05:37.282 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:05:37.540 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:05:37.540 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:05:37.540 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:05:37.540 [531/740] Generating lib/rte_pipeline_def with a custom command 00:05:37.540 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:05:37.540 [533/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:05:37.540 [534/740] Linking static target lib/librte_table.a 00:05:37.798 [535/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:05:38.057 [536/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:05:38.057 [537/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:05:38.316 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:05:38.316 [539/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.316 [540/740] Linking target lib/librte_table.so.23.0 00:05:38.575 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:05:38.575 [542/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:05:38.575 [543/740] Generating lib/rte_graph_def with a custom command 00:05:38.575 [544/740] Generating lib/rte_graph_mingw with 
a custom command 00:05:38.575 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:05:38.575 [546/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:38.834 [547/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:05:38.834 [548/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:05:38.834 [549/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:05:38.834 [550/740] Linking static target lib/librte_graph.a 00:05:39.093 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:05:39.093 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:05:39.093 [553/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:05:39.093 [554/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:05:39.352 [555/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:05:39.352 [556/740] Generating lib/rte_node_def with a custom command 00:05:39.610 [557/740] Generating lib/rte_node_mingw with a custom command 00:05:39.610 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:05:39.610 [559/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:05:39.610 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:39.868 [561/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.868 [562/740] Linking target lib/librte_graph.so.23.0 00:05:39.868 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:39.868 [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:39.868 [565/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:05:39.868 [566/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:05:39.868 [567/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:05:39.868 [568/740] Generating drivers/rte_bus_pci_def with a custom command 00:05:39.868 [569/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:05:39.868 [570/740] Generating drivers/rte_bus_vdev_def with a custom command 00:05:39.868 [571/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:39.868 [572/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:05:40.129 [573/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:05:40.129 [574/740] Generating drivers/rte_mempool_ring_def with a custom command 00:05:40.129 [575/740] Linking static target lib/librte_node.a 00:05:40.129 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:40.129 [577/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:05:40.129 [578/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:40.129 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:40.129 [580/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.129 [581/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:40.387 [582/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:40.387 [583/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:40.387 [584/740] Linking target lib/librte_node.so.23.0 00:05:40.387 [585/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:40.387 [586/740] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:40.387 [587/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:40.387 [588/740] Linking static target drivers/librte_bus_vdev.a 00:05:40.387 [589/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:40.387 [590/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:40.387 [591/740] Linking static target drivers/librte_bus_pci.a 00:05:40.645 [592/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.645 [593/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:40.645 [594/740] Linking target drivers/librte_bus_vdev.so.23.0 00:05:40.645 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:05:40.904 [596/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.904 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:05:40.904 [598/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:05:40.904 [599/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:05:40.904 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:05:40.904 [601/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:40.904 [602/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:40.904 [603/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:05:41.163 [604/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:41.163 [605/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:41.163 [606/740] Linking static target drivers/librte_mempool_ring.a 00:05:41.163 [607/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:41.163 [608/740] Linking target drivers/librte_mempool_ring.so.23.0 00:05:41.163 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:05:41.730 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:05:41.730 [611/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:05:41.989 [612/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:05:41.989 [613/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:05:42.248 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:05:42.506 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:05:42.506 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:05:42.765 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:05:43.023 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:05:43.023 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:05:43.282 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:05:43.282 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:05:43.282 [622/740] Generating drivers/rte_net_i40e_def with a custom command 00:05:43.282 [623/740] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:05:43.541 [624/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:05:44.107 [625/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:05:44.365 [626/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:05:44.365 [627/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:05:44.365 [628/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:05:44.365 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:05:44.624 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:05:44.624 [631/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:05:44.624 [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:05:44.624 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:05:44.882 [634/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:05:45.140 [635/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:05:45.399 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:05:45.399 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:05:45.399 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:05:45.399 [639/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:05:45.658 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:05:45.659 [641/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:05:45.659 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:05:45.659 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:05:45.659 [644/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:05:45.659 [645/740] Linking static target drivers/librte_net_i40e.a 00:05:45.918 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:05:45.918 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:05:45.918 [648/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:05:46.177 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:05:46.177 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:05:46.435 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:05:46.435 [652/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.694 [653/740] Linking target drivers/librte_net_i40e.so.23.0 00:05:46.694 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:05:46.694 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:05:46.694 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:05:46.694 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:05:46.952 [658/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:05:46.952 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:05:46.952 [660/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:46.952 [661/740] Linking static target lib/librte_vhost.a 00:05:46.952 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:05:47.210 [663/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:05:47.210 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:05:47.210 [665/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:05:47.210 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:05:47.469 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:05:47.469 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:05:47.728 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:05:48.294 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:05:48.295 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:05:48.295 [672/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.295 [673/740] Linking target lib/librte_vhost.so.23.0 00:05:48.295 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:05:48.553 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:05:48.553 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:05:48.553 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:05:48.811 [678/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:05:48.811 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:05:48.811 [680/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:05:49.069 [681/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:05:49.069 [682/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:05:49.070 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:05:49.070 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:05:49.327 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:05:49.328 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:05:49.328 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:05:49.586 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:05:49.586 [689/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:05:49.586 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:05:49.586 [691/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:05:49.843 [692/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:05:49.843 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:05:49.843 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:05:50.101 [695/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:05:50.358 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:05:50.358 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:05:50.616 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:05:50.616 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:05:50.616 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:05:50.874 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:05:51.131 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:05:51.387 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:05:51.387 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:05:51.387 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:05:51.387 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:05:51.644 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:05:51.644 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:05:51.902 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:05:52.160 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:05:52.417 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:05:52.417 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:05:52.417 [713/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:05:52.674 [714/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:05:52.674 [715/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:05:52.674 [716/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:05:52.674 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:05:52.674 [718/740] Linking static target lib/librte_pipeline.a 00:05:52.674 [719/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:05:52.674 [720/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:05:52.932 [721/740] Linking target app/dpdk-test-acl 00:05:52.932 [722/740] Linking target app/dpdk-test-cmdline 00:05:52.932 [723/740] Linking target app/dpdk-test-bbdev 00:05:53.191 [724/740] Linking target app/dpdk-pdump 00:05:53.191 [725/740] Linking target app/dpdk-test-compress-perf 00:05:53.191 [726/740] Linking target app/dpdk-proc-info 00:05:53.191 [727/740] Linking target app/dpdk-test-crypto-perf 00:05:53.451 [728/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:05:53.451 [729/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:05:53.451 [730/740] Linking target app/dpdk-test-fib 00:05:53.451 [731/740] Linking target app/dpdk-test-eventdev 00:05:53.451 [732/740] Linking target app/dpdk-test-flow-perf 00:05:53.451 [733/740] Linking target app/dpdk-test-gpudev 00:05:53.451 [734/740] Linking target app/dpdk-test-regex 00:05:53.451 [735/740] Linking target app/dpdk-test-pipeline 00:05:53.451 [736/740] Linking target app/dpdk-test-sad 00:05:53.731 [737/740] Linking target app/dpdk-test-security-perf 00:05:53.989 [738/740] Linking target app/dpdk-testpmd 00:05:56.518 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.518 [740/740] Linking target lib/librte_pipeline.so.23.0 00:05:56.518 11:50:01 -- common/autobuild_common.sh@190 -- $ ninja -C 
/home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:05:56.518 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:56.518 [0/1] Installing files. 00:05:56.777 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:56.777 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:57.039 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:57.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:57.040 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.040 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:57.041 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:57.042 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:05:57.042 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:57.042 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:05:57.043 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:57.043 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:57.043 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.043 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 
Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:05:57.302 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:05:57.302 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:05:57.302 Installing 
drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.302 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:05:57.302 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.302 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
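The portable fallback headers just copied into build/include/generic (rte_atomic.h, rte_memcpy.h, rte_spinlock.h, and so on) are layered under the x86-specific variants installed next into build/include, so application code includes only the plain header name. As a hedged illustration (not part of this build log; the counter is illustrative), a translation unit that includes <rte_spinlock.h> from this tree picks up whichever implementation matches the installed arch headers:

    /* Hedged sketch: uses only headers shown being installed above.
     * Not part of the build; the counter and its name are illustrative. */
    #include <stdio.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t counter_lock = RTE_SPINLOCK_INITIALIZER;
    static int counter;

    static void bump(void)
    {
            rte_spinlock_lock(&counter_lock);   /* x86 implementation here; generic/ elsewhere */
            counter++;
            rte_spinlock_unlock(&counter_lock);
    }

    int main(void)
    {
            bump();
            printf("counter=%d\n", counter);
            return 0;
    }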
00:05:57.563 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
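With rte_eal.h, rte_lcore.h, rte_log.h and rte_malloc.h now present under build/include, a minimal EAL program can be compiled against this tree. A hedged sketch follows (the EAL arguments, buffer size and names are illustrative, not taken from the log):

    /* Hedged sketch of EAL startup against the headers installed above.
     * The argv values and allocation below are illustrative, not from this build. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_malloc.h>

    int main(void)
    {
            char *argv[] = { "eal-sketch", "-l", "0", "--no-huge", NULL };
            int argc = 4;

            if (rte_eal_init(argc, argv) < 0) {
                    fprintf(stderr, "rte_eal_init failed\n");
                    return 1;
            }

            /* rte_malloc draws from EAL-managed memory rather than libc. */
            void *buf = rte_malloc("sketch_buf", 1024, 0);
            printf("lcores=%u buf=%p\n", rte_lcore_count(), buf);

            rte_free(buf);
            rte_eal_cleanup();
            return 0;
    }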
00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.564 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.565 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:05:57.566 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:05:57.566 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:05:57.566 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:05:57.566 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:05:57.566 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:05:57.566 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:05:57.566 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:05:57.566 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:05:57.566 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:05:57.566 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:05:57.566 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:05:57.566 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:05:57.566 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:05:57.566 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:05:57.566 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:05:57.566 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:05:57.566 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:05:57.566 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:05:57.566 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:05:57.566 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:05:57.566 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:05:57.566 Installing symlink pointing to librte_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:05:57.566 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:05:57.566 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:05:57.566 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:05:57.566 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:05:57.566 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:05:57.566 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:05:57.566 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:05:57.566 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:05:57.566 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:05:57.566 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:05:57.567 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:05:57.567 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:05:57.567 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:05:57.567 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:05:57.567 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:05:57.567 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:05:57.567 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:05:57.567 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:05:57.567 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:05:57.567 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:05:57.567 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:05:57.567 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:05:57.567 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:05:57.567 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:05:57.567 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:05:57.567 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:05:57.567 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:05:57.567 Installing symlink pointing to librte_eventdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:05:57.567 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:05:57.567 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:05:57.567 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:05:57.567 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:05:57.567 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:05:57.567 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:05:57.567 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:05:57.567 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:05:57.567 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:05:57.567 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:05:57.567 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:05:57.567 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:05:57.567 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:05:57.567 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:05:57.567 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:05:57.567 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:05:57.567 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:05:57.567 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:05:57.567 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:05:57.567 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:05:57.567 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:05:57.567 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:05:57.567 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:05:57.567 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:05:57.567 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:05:57.567 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:05:57.567 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:05:57.567 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:05:57.567 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:05:57.567 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:05:57.567 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:05:57.567 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 
00:05:57.567 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:05:57.567 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:05:57.567 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:05:57.567 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:05:57.567 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:05:57.567 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:05:57.567 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:05:57.567 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:05:57.567 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:05:57.567 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:05:57.567 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:05:57.567 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:05:57.567 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:05:57.567 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:05:57.567 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:05:57.567 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:05:57.567 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:05:57.567 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:05:57.567 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:05:57.567 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:05:57.567 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:05:57.567 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:05:57.567 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:05:57.567 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:05:57.567 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:05:57.567 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:05:57.567 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:05:57.567 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:05:57.567 Installing symlink pointing to librte_table.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:05:57.567 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:05:57.567 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:05:57.567 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:05:57.567 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:05:57.567 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:05:57.567 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:05:57.567 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:05:57.567 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:57.567 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:05:57.567 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:57.567 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:05:57.567 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:57.567 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:05:57.567 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:57.567 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:05:57.567 11:50:02 -- common/autobuild_common.sh@192 -- $ uname -s 00:05:57.567 11:50:02 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:05:57.567 11:50:02 -- common/autobuild_common.sh@203 -- $ cat 00:05:57.567 11:50:02 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:57.567 00:05:57.567 real 0m46.496s 00:05:57.567 user 5m12.529s 00:05:57.567 sys 0m46.616s 00:05:57.567 11:50:02 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:57.567 11:50:02 -- common/autotest_common.sh@10 -- $ set +x 00:05:57.567 ************************************ 00:05:57.567 END TEST build_native_dpdk 00:05:57.567 ************************************ 00:05:57.567 11:50:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:57.567 11:50:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:57.567 11:50:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:57.567 11:50:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:57.567 11:50:03 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:05:57.567 11:50:03 -- spdk/autobuild.sh@58 -- $ unittest_build 00:05:57.567 11:50:03 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build 00:05:57.567 11:50:03 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:05:57.567 11:50:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:57.567 11:50:03 -- common/autotest_common.sh@10 -- $ set +x 
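The DPDK install phase above ends by copying libdpdk.pc and libdpdk-libs.pc into /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig and symlinking the PMD drivers under dpdk/pmds-23.0. As a minimal illustrative sketch (not part of the captured output), a consumer can locate that private DPDK build purely through pkg-config, assuming the same prefix shown in the install lines:

    # Point pkg-config at the freshly installed DPDK (path taken from the install lines above)
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # the 22.11 stable release fetched earlier in this job
    pkg-config --cflags libdpdk       # include flags pointing at build/include
    pkg-config --libs libdpdk         # linker flags for the rte_* libraries

The SPDK configure step in the next test picks up exactly this pkgconfig directory ("Using .../dpdk/build/lib/pkgconfig for additional libs").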
00:05:57.567 ************************************ 00:05:57.567 START TEST unittest_build 00:05:57.567 ************************************ 00:05:57.568 11:50:03 -- common/autotest_common.sh@1114 -- $ _unittest_build 00:05:57.568 11:50:03 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:05:57.826 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:05:57.826 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:05:57.826 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:05:57.826 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:58.084 Using 'verbs' RDMA provider 00:06:10.847 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:06:23.042 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:06:23.042 Creating mk/config.mk...done. 00:06:23.042 Creating mk/cc.flags.mk...done. 00:06:23.042 Type 'make' to build. 00:06:23.042 11:50:28 -- common/autobuild_common.sh@408 -- $ make -j10 00:06:23.042 make[1]: Nothing to be done for 'all'. 00:06:44.966 CC lib/log/log.o 00:06:44.966 CC lib/log/log_deprecated.o 00:06:44.966 CC lib/log/log_flags.o 00:06:44.966 CC lib/ut_mock/mock.o 00:06:44.966 CC lib/ut/ut.o 00:06:44.966 LIB libspdk_ut_mock.a 00:06:44.966 LIB libspdk_log.a 00:06:44.966 LIB libspdk_ut.a 00:06:44.966 CC lib/dma/dma.o 00:06:44.966 CC lib/ioat/ioat.o 00:06:44.966 CC lib/util/base64.o 00:06:44.966 CXX lib/trace_parser/trace.o 00:06:44.966 CC lib/util/bit_array.o 00:06:44.966 CC lib/util/cpuset.o 00:06:44.966 CC lib/util/crc16.o 00:06:44.966 CC lib/util/crc32.o 00:06:44.966 CC lib/util/crc32c.o 00:06:44.966 CC lib/vfio_user/host/vfio_user_pci.o 00:06:44.966 CC lib/vfio_user/host/vfio_user.o 00:06:44.966 CC lib/util/crc32_ieee.o 00:06:44.966 LIB libspdk_dma.a 00:06:44.966 CC lib/util/crc64.o 00:06:44.966 CC lib/util/dif.o 00:06:44.966 CC lib/util/fd.o 00:06:44.966 CC lib/util/file.o 00:06:44.966 CC lib/util/hexlify.o 00:06:44.966 LIB libspdk_ioat.a 00:06:44.966 CC lib/util/iov.o 00:06:44.966 CC lib/util/math.o 00:06:44.966 CC lib/util/pipe.o 00:06:44.966 CC lib/util/strerror_tls.o 00:06:44.966 CC lib/util/string.o 00:06:44.966 LIB libspdk_vfio_user.a 00:06:44.966 CC lib/util/uuid.o 00:06:44.966 CC lib/util/fd_group.o 00:06:44.966 CC lib/util/xor.o 00:06:44.966 CC lib/util/zipf.o 00:06:45.225 LIB libspdk_util.a 00:06:45.483 CC lib/conf/conf.o 00:06:45.483 CC lib/vmd/vmd.o 00:06:45.483 CC lib/vmd/led.o 00:06:45.483 CC lib/rdma/common.o 00:06:45.483 CC lib/idxd/idxd.o 00:06:45.483 CC lib/rdma/rdma_verbs.o 00:06:45.483 CC lib/json/json_parse.o 00:06:45.483 CC lib/json/json_util.o 00:06:45.483 CC lib/env_dpdk/env.o 00:06:45.483 LIB libspdk_trace_parser.a 00:06:45.483 CC lib/env_dpdk/memory.o 00:06:45.483 CC lib/env_dpdk/pci.o 00:06:45.483 CC lib/env_dpdk/init.o 00:06:45.742 LIB libspdk_conf.a 00:06:45.742 CC lib/idxd/idxd_user.o 00:06:45.742 CC lib/json/json_write.o 00:06:45.742 CC lib/env_dpdk/threads.o 00:06:45.742 LIB libspdk_rdma.a 00:06:45.742 CC lib/env_dpdk/pci_ioat.o 00:06:45.742 CC lib/env_dpdk/pci_virtio.o 00:06:45.742 CC lib/env_dpdk/pci_vmd.o 00:06:45.742 CC lib/env_dpdk/pci_idxd.o 00:06:46.000 CC lib/env_dpdk/pci_event.o 00:06:46.000 CC lib/env_dpdk/sigbus_handler.o 
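For reference, the unittest_build step above boils down to configuring SPDK against the pre-built DPDK and running make. A condensed sketch using only flags that appear verbatim in the configure line of this log (several options such as --with-fio, --with-raid5f and the RDMA/iSCSI switches are omitted here for brevity, and the paths are this CI job's, not general defaults):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --enable-asan --enable-ubsan --enable-coverage \
        --without-shared
    make -j10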
00:06:46.000 CC lib/env_dpdk/pci_dpdk.o 00:06:46.000 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:46.000 LIB libspdk_json.a 00:06:46.000 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:46.000 LIB libspdk_idxd.a 00:06:46.000 CC lib/jsonrpc/jsonrpc_server.o 00:06:46.000 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:46.000 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:46.000 CC lib/jsonrpc/jsonrpc_client.o 00:06:46.259 LIB libspdk_vmd.a 00:06:46.259 LIB libspdk_jsonrpc.a 00:06:46.518 CC lib/rpc/rpc.o 00:06:46.776 LIB libspdk_rpc.a 00:06:46.776 CC lib/sock/sock.o 00:06:46.776 CC lib/sock/sock_rpc.o 00:06:46.776 CC lib/trace/trace.o 00:06:46.776 CC lib/trace/trace_flags.o 00:06:46.776 CC lib/trace/trace_rpc.o 00:06:46.776 CC lib/notify/notify.o 00:06:46.776 CC lib/notify/notify_rpc.o 00:06:47.035 LIB libspdk_notify.a 00:06:47.035 LIB libspdk_env_dpdk.a 00:06:47.035 LIB libspdk_trace.a 00:06:47.293 CC lib/thread/thread.o 00:06:47.293 CC lib/thread/iobuf.o 00:06:47.293 LIB libspdk_sock.a 00:06:47.293 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:47.293 CC lib/nvme/nvme_fabric.o 00:06:47.293 CC lib/nvme/nvme_ctrlr.o 00:06:47.293 CC lib/nvme/nvme_ns_cmd.o 00:06:47.293 CC lib/nvme/nvme_ns.o 00:06:47.293 CC lib/nvme/nvme_pcie_common.o 00:06:47.293 CC lib/nvme/nvme_qpair.o 00:06:47.293 CC lib/nvme/nvme_pcie.o 00:06:47.552 CC lib/nvme/nvme.o 00:06:48.118 CC lib/nvme/nvme_quirks.o 00:06:48.118 CC lib/nvme/nvme_transport.o 00:06:48.118 CC lib/nvme/nvme_discovery.o 00:06:48.376 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:48.376 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:48.376 CC lib/nvme/nvme_tcp.o 00:06:48.376 CC lib/nvme/nvme_opal.o 00:06:48.376 CC lib/nvme/nvme_io_msg.o 00:06:48.634 CC lib/nvme/nvme_poll_group.o 00:06:48.634 CC lib/nvme/nvme_zns.o 00:06:48.634 CC lib/nvme/nvme_cuse.o 00:06:48.893 CC lib/nvme/nvme_vfio_user.o 00:06:48.893 CC lib/nvme/nvme_rdma.o 00:06:49.151 LIB libspdk_thread.a 00:06:49.409 CC lib/blob/blobstore.o 00:06:49.409 CC lib/blob/request.o 00:06:49.409 CC lib/blob/zeroes.o 00:06:49.409 CC lib/init/json_config.o 00:06:49.409 CC lib/accel/accel.o 00:06:49.409 CC lib/virtio/virtio.o 00:06:49.667 CC lib/blob/blob_bs_dev.o 00:06:49.667 CC lib/virtio/virtio_vhost_user.o 00:06:49.667 CC lib/virtio/virtio_vfio_user.o 00:06:49.667 CC lib/init/subsystem.o 00:06:49.935 CC lib/init/subsystem_rpc.o 00:06:49.935 CC lib/init/rpc.o 00:06:49.935 CC lib/accel/accel_rpc.o 00:06:49.935 CC lib/accel/accel_sw.o 00:06:49.935 CC lib/virtio/virtio_pci.o 00:06:49.935 LIB libspdk_init.a 00:06:50.210 CC lib/event/app.o 00:06:50.210 CC lib/event/reactor.o 00:06:50.211 CC lib/event/log_rpc.o 00:06:50.211 CC lib/event/app_rpc.o 00:06:50.211 CC lib/event/scheduler_static.o 00:06:50.211 LIB libspdk_nvme.a 00:06:50.467 LIB libspdk_virtio.a 00:06:50.769 LIB libspdk_event.a 00:06:50.769 LIB libspdk_accel.a 00:06:50.769 CC lib/bdev/bdev.o 00:06:50.769 CC lib/bdev/bdev_rpc.o 00:06:50.769 CC lib/bdev/bdev_zone.o 00:06:50.769 CC lib/bdev/part.o 00:06:50.769 CC lib/bdev/scsi_nvme.o 00:06:53.300 LIB libspdk_blob.a 00:06:53.300 CC lib/blobfs/blobfs.o 00:06:53.300 CC lib/blobfs/tree.o 00:06:53.300 CC lib/lvol/lvol.o 00:06:53.867 LIB libspdk_bdev.a 00:06:53.867 LIB libspdk_blobfs.a 00:06:54.125 LIB libspdk_lvol.a 00:06:54.125 CC lib/nbd/nbd.o 00:06:54.125 CC lib/nbd/nbd_rpc.o 00:06:54.125 CC lib/nvmf/ctrlr.o 00:06:54.125 CC lib/nvmf/ctrlr_discovery.o 00:06:54.125 CC lib/nvmf/ctrlr_bdev.o 00:06:54.125 CC lib/nvmf/subsystem.o 00:06:54.125 CC lib/ftl/ftl_core.o 00:06:54.125 CC lib/ftl/ftl_init.o 00:06:54.125 CC lib/nvmf/nvmf.o 00:06:54.125 CC lib/scsi/dev.o 
00:06:54.125 CC lib/scsi/lun.o 00:06:54.383 CC lib/ftl/ftl_layout.o 00:06:54.383 CC lib/nvmf/nvmf_rpc.o 00:06:54.641 CC lib/scsi/port.o 00:06:54.641 LIB libspdk_nbd.a 00:06:54.641 CC lib/scsi/scsi.o 00:06:54.641 CC lib/ftl/ftl_debug.o 00:06:54.641 CC lib/ftl/ftl_io.o 00:06:54.641 CC lib/ftl/ftl_sb.o 00:06:54.641 CC lib/ftl/ftl_l2p.o 00:06:54.641 CC lib/scsi/scsi_bdev.o 00:06:54.897 CC lib/scsi/scsi_pr.o 00:06:54.897 CC lib/scsi/scsi_rpc.o 00:06:54.897 CC lib/scsi/task.o 00:06:54.897 CC lib/ftl/ftl_l2p_flat.o 00:06:54.897 CC lib/nvmf/transport.o 00:06:55.155 CC lib/ftl/ftl_nv_cache.o 00:06:55.155 CC lib/nvmf/tcp.o 00:06:55.155 CC lib/nvmf/rdma.o 00:06:55.155 CC lib/ftl/ftl_band.o 00:06:55.155 CC lib/ftl/ftl_band_ops.o 00:06:55.413 CC lib/ftl/ftl_writer.o 00:06:55.413 CC lib/ftl/ftl_rq.o 00:06:55.413 LIB libspdk_scsi.a 00:06:55.413 CC lib/ftl/ftl_reloc.o 00:06:55.672 CC lib/ftl/ftl_l2p_cache.o 00:06:55.672 CC lib/ftl/ftl_p2l.o 00:06:55.672 CC lib/iscsi/conn.o 00:06:55.930 CC lib/vhost/vhost.o 00:06:55.930 CC lib/iscsi/init_grp.o 00:06:55.930 CC lib/ftl/mngt/ftl_mngt.o 00:06:55.930 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:55.930 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:56.189 CC lib/iscsi/iscsi.o 00:06:56.189 CC lib/iscsi/md5.o 00:06:56.189 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:56.189 CC lib/iscsi/param.o 00:06:56.189 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:56.189 CC lib/iscsi/portal_grp.o 00:06:56.447 CC lib/vhost/vhost_rpc.o 00:06:56.447 CC lib/iscsi/tgt_node.o 00:06:56.447 CC lib/iscsi/iscsi_subsystem.o 00:06:56.447 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:56.705 CC lib/iscsi/iscsi_rpc.o 00:06:56.705 CC lib/iscsi/task.o 00:06:56.705 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:56.964 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:56.964 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:56.964 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:56.964 CC lib/vhost/vhost_scsi.o 00:06:56.964 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:56.964 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:56.964 CC lib/vhost/vhost_blk.o 00:06:56.964 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:56.964 CC lib/ftl/utils/ftl_conf.o 00:06:57.222 CC lib/ftl/utils/ftl_md.o 00:06:57.222 CC lib/ftl/utils/ftl_mempool.o 00:06:57.222 CC lib/ftl/utils/ftl_bitmap.o 00:06:57.222 CC lib/ftl/utils/ftl_property.o 00:06:57.222 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:57.480 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:57.480 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:57.480 CC lib/vhost/rte_vhost_user.o 00:06:57.480 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:57.480 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:57.738 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:57.738 LIB libspdk_nvmf.a 00:06:57.738 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:57.738 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:57.738 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:57.738 CC lib/ftl/base/ftl_base_dev.o 00:06:57.738 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:57.738 CC lib/ftl/base/ftl_base_bdev.o 00:06:57.738 CC lib/ftl/ftl_trace.o 00:06:57.997 LIB libspdk_iscsi.a 00:06:58.256 LIB libspdk_ftl.a 00:06:58.515 LIB libspdk_vhost.a 00:06:58.774 CC module/env_dpdk/env_dpdk_rpc.o 00:06:58.774 CC module/accel/iaa/accel_iaa.o 00:06:58.774 CC module/accel/ioat/accel_ioat.o 00:06:58.774 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:58.774 CC module/scheduler/gscheduler/gscheduler.o 00:06:58.774 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:58.774 CC module/accel/dsa/accel_dsa.o 00:06:58.774 CC module/blob/bdev/blob_bdev.o 00:06:58.774 CC module/accel/error/accel_error.o 00:06:58.774 CC module/sock/posix/posix.o 00:06:59.033 
LIB libspdk_env_dpdk_rpc.a 00:06:59.033 LIB libspdk_scheduler_dpdk_governor.a 00:06:59.033 CC module/accel/ioat/accel_ioat_rpc.o 00:06:59.033 CC module/accel/iaa/accel_iaa_rpc.o 00:06:59.033 LIB libspdk_scheduler_dynamic.a 00:06:59.033 CC module/accel/dsa/accel_dsa_rpc.o 00:06:59.033 CC module/accel/error/accel_error_rpc.o 00:06:59.033 LIB libspdk_scheduler_gscheduler.a 00:06:59.033 LIB libspdk_blob_bdev.a 00:06:59.033 LIB libspdk_accel_ioat.a 00:06:59.033 LIB libspdk_accel_iaa.a 00:06:59.033 LIB libspdk_accel_dsa.a 00:06:59.033 LIB libspdk_accel_error.a 00:06:59.292 CC module/bdev/error/vbdev_error.o 00:06:59.292 CC module/bdev/gpt/gpt.o 00:06:59.292 CC module/blobfs/bdev/blobfs_bdev.o 00:06:59.292 CC module/bdev/lvol/vbdev_lvol.o 00:06:59.292 CC module/bdev/delay/vbdev_delay.o 00:06:59.292 CC module/bdev/malloc/bdev_malloc.o 00:06:59.292 CC module/bdev/null/bdev_null.o 00:06:59.292 CC module/bdev/passthru/vbdev_passthru.o 00:06:59.292 CC module/bdev/nvme/bdev_nvme.o 00:06:59.292 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:59.550 CC module/bdev/gpt/vbdev_gpt.o 00:06:59.550 CC module/bdev/error/vbdev_error_rpc.o 00:06:59.550 LIB libspdk_blobfs_bdev.a 00:06:59.550 CC module/bdev/null/bdev_null_rpc.o 00:06:59.550 LIB libspdk_sock_posix.a 00:06:59.550 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:59.550 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:59.550 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:59.550 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:59.808 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:59.808 LIB libspdk_bdev_error.a 00:06:59.808 CC module/bdev/nvme/nvme_rpc.o 00:06:59.808 LIB libspdk_bdev_passthru.a 00:06:59.808 LIB libspdk_bdev_delay.a 00:06:59.808 LIB libspdk_bdev_null.a 00:06:59.808 LIB libspdk_bdev_gpt.a 00:06:59.808 LIB libspdk_bdev_malloc.a 00:06:59.808 CC module/bdev/raid/bdev_raid.o 00:06:59.808 CC module/bdev/raid/bdev_raid_rpc.o 00:06:59.808 CC module/bdev/aio/bdev_aio.o 00:07:00.067 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:00.067 CC module/bdev/ftl/bdev_ftl.o 00:07:00.067 CC module/bdev/split/vbdev_split.o 00:07:00.067 LIB libspdk_bdev_lvol.a 00:07:00.067 CC module/bdev/split/vbdev_split_rpc.o 00:07:00.067 CC module/bdev/raid/bdev_raid_sb.o 00:07:00.344 LIB libspdk_bdev_split.a 00:07:00.344 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:00.344 CC module/bdev/iscsi/bdev_iscsi.o 00:07:00.344 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:00.344 CC module/bdev/aio/bdev_aio_rpc.o 00:07:00.344 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:00.344 CC module/bdev/raid/raid0.o 00:07:00.344 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:00.344 CC module/bdev/raid/raid1.o 00:07:00.344 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:00.602 LIB libspdk_bdev_aio.a 00:07:00.602 LIB libspdk_bdev_ftl.a 00:07:00.602 LIB libspdk_bdev_zone_block.a 00:07:00.602 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:00.602 CC module/bdev/nvme/bdev_mdns_client.o 00:07:00.602 CC module/bdev/raid/concat.o 00:07:00.602 CC module/bdev/raid/raid5f.o 00:07:00.602 LIB libspdk_bdev_iscsi.a 00:07:00.602 CC module/bdev/nvme/vbdev_opal.o 00:07:00.602 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:00.861 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:00.861 LIB libspdk_bdev_virtio.a 00:07:01.120 LIB libspdk_bdev_raid.a 00:07:01.687 LIB libspdk_bdev_nvme.a 00:07:02.254 CC module/event/subsystems/vmd/vmd.o 00:07:02.254 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:02.254 CC module/event/subsystems/sock/sock.o 00:07:02.254 CC module/event/subsystems/iobuf/iobuf.o 00:07:02.254 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:07:02.254 CC module/event/subsystems/scheduler/scheduler.o 00:07:02.254 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:02.254 LIB libspdk_event_sock.a 00:07:02.254 LIB libspdk_event_scheduler.a 00:07:02.254 LIB libspdk_event_vhost_blk.a 00:07:02.254 LIB libspdk_event_vmd.a 00:07:02.254 LIB libspdk_event_iobuf.a 00:07:02.513 CC module/event/subsystems/accel/accel.o 00:07:02.513 LIB libspdk_event_accel.a 00:07:02.772 CC module/event/subsystems/bdev/bdev.o 00:07:03.030 LIB libspdk_event_bdev.a 00:07:03.030 CC module/event/subsystems/nbd/nbd.o 00:07:03.030 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:03.030 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:03.030 CC module/event/subsystems/scsi/scsi.o 00:07:03.289 LIB libspdk_event_nbd.a 00:07:03.289 LIB libspdk_event_scsi.a 00:07:03.289 LIB libspdk_event_nvmf.a 00:07:03.289 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:03.289 CC module/event/subsystems/iscsi/iscsi.o 00:07:03.548 LIB libspdk_event_vhost_scsi.a 00:07:03.548 LIB libspdk_event_iscsi.a 00:07:03.807 CXX app/trace/trace.o 00:07:03.807 CC app/trace_record/trace_record.o 00:07:03.807 CC app/nvmf_tgt/nvmf_main.o 00:07:03.807 CC examples/bdev/hello_world/hello_bdev.o 00:07:03.807 CC examples/accel/perf/accel_perf.o 00:07:03.807 CC app/iscsi_tgt/iscsi_tgt.o 00:07:03.807 CC app/spdk_tgt/spdk_tgt.o 00:07:03.807 CC examples/ioat/perf/perf.o 00:07:03.807 CC test/accel/dif/dif.o 00:07:03.807 CC examples/blob/hello_world/hello_blob.o 00:07:04.066 LINK nvmf_tgt 00:07:04.066 LINK spdk_tgt 00:07:04.066 LINK iscsi_tgt 00:07:04.066 LINK hello_bdev 00:07:04.066 LINK spdk_trace_record 00:07:04.066 LINK ioat_perf 00:07:04.325 LINK hello_blob 00:07:04.325 LINK spdk_trace 00:07:04.325 LINK accel_perf 00:07:04.325 LINK dif 00:07:04.891 CC examples/blob/cli/blobcli.o 00:07:04.891 CC examples/bdev/bdevperf/bdevperf.o 00:07:04.891 CC examples/ioat/verify/verify.o 00:07:05.150 LINK verify 00:07:05.150 LINK blobcli 00:07:05.716 CC test/app/bdev_svc/bdev_svc.o 00:07:05.716 LINK bdevperf 00:07:05.974 LINK bdev_svc 00:07:05.974 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:06.540 LINK nvme_fuzz 00:07:07.107 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:07.365 CC test/app/histogram_perf/histogram_perf.o 00:07:07.365 CC test/bdev/bdevio/bdevio.o 00:07:07.623 TEST_HEADER include/spdk/accel.h 00:07:07.623 TEST_HEADER include/spdk/accel_module.h 00:07:07.623 TEST_HEADER include/spdk/assert.h 00:07:07.623 TEST_HEADER include/spdk/barrier.h 00:07:07.623 TEST_HEADER include/spdk/base64.h 00:07:07.623 TEST_HEADER include/spdk/bdev.h 00:07:07.623 TEST_HEADER include/spdk/bdev_module.h 00:07:07.623 TEST_HEADER include/spdk/bdev_zone.h 00:07:07.623 TEST_HEADER include/spdk/bit_array.h 00:07:07.623 TEST_HEADER include/spdk/bit_pool.h 00:07:07.623 TEST_HEADER include/spdk/blob.h 00:07:07.623 TEST_HEADER include/spdk/blob_bdev.h 00:07:07.623 TEST_HEADER include/spdk/blobfs.h 00:07:07.623 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:07.623 TEST_HEADER include/spdk/conf.h 00:07:07.623 TEST_HEADER include/spdk/config.h 00:07:07.623 LINK histogram_perf 00:07:07.623 TEST_HEADER include/spdk/cpuset.h 00:07:07.623 TEST_HEADER include/spdk/crc16.h 00:07:07.623 TEST_HEADER include/spdk/crc32.h 00:07:07.623 CC test/blobfs/mkfs/mkfs.o 00:07:07.623 TEST_HEADER include/spdk/crc64.h 00:07:07.623 TEST_HEADER include/spdk/dif.h 00:07:07.623 TEST_HEADER include/spdk/dma.h 00:07:07.623 TEST_HEADER include/spdk/endian.h 00:07:07.623 TEST_HEADER include/spdk/env.h 00:07:07.623 
TEST_HEADER include/spdk/env_dpdk.h 00:07:07.623 TEST_HEADER include/spdk/event.h 00:07:07.623 TEST_HEADER include/spdk/fd.h 00:07:07.623 TEST_HEADER include/spdk/fd_group.h 00:07:07.623 TEST_HEADER include/spdk/file.h 00:07:07.623 TEST_HEADER include/spdk/ftl.h 00:07:07.623 TEST_HEADER include/spdk/gpt_spec.h 00:07:07.623 TEST_HEADER include/spdk/hexlify.h 00:07:07.623 TEST_HEADER include/spdk/histogram_data.h 00:07:07.882 TEST_HEADER include/spdk/idxd.h 00:07:07.882 TEST_HEADER include/spdk/idxd_spec.h 00:07:07.882 LINK mkfs 00:07:07.882 TEST_HEADER include/spdk/init.h 00:07:07.882 TEST_HEADER include/spdk/ioat.h 00:07:07.882 TEST_HEADER include/spdk/ioat_spec.h 00:07:07.882 TEST_HEADER include/spdk/iscsi_spec.h 00:07:07.882 TEST_HEADER include/spdk/json.h 00:07:07.882 TEST_HEADER include/spdk/jsonrpc.h 00:07:07.882 TEST_HEADER include/spdk/likely.h 00:07:07.882 TEST_HEADER include/spdk/log.h 00:07:07.882 TEST_HEADER include/spdk/lvol.h 00:07:07.882 TEST_HEADER include/spdk/memory.h 00:07:07.882 TEST_HEADER include/spdk/mmio.h 00:07:07.882 TEST_HEADER include/spdk/nbd.h 00:07:07.882 TEST_HEADER include/spdk/notify.h 00:07:07.882 TEST_HEADER include/spdk/nvme.h 00:07:07.882 TEST_HEADER include/spdk/nvme_intel.h 00:07:07.882 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:07.882 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:07.882 TEST_HEADER include/spdk/nvme_spec.h 00:07:07.882 TEST_HEADER include/spdk/nvme_zns.h 00:07:07.882 TEST_HEADER include/spdk/nvmf.h 00:07:07.882 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:07.882 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:07.882 TEST_HEADER include/spdk/nvmf_spec.h 00:07:07.882 TEST_HEADER include/spdk/nvmf_transport.h 00:07:07.882 TEST_HEADER include/spdk/opal.h 00:07:07.882 TEST_HEADER include/spdk/opal_spec.h 00:07:07.882 TEST_HEADER include/spdk/pci_ids.h 00:07:07.882 TEST_HEADER include/spdk/pipe.h 00:07:07.882 TEST_HEADER include/spdk/queue.h 00:07:07.882 TEST_HEADER include/spdk/reduce.h 00:07:07.882 TEST_HEADER include/spdk/rpc.h 00:07:07.882 TEST_HEADER include/spdk/scheduler.h 00:07:07.882 TEST_HEADER include/spdk/scsi.h 00:07:07.882 TEST_HEADER include/spdk/scsi_spec.h 00:07:07.882 TEST_HEADER include/spdk/sock.h 00:07:07.882 TEST_HEADER include/spdk/stdinc.h 00:07:07.882 TEST_HEADER include/spdk/string.h 00:07:07.882 TEST_HEADER include/spdk/thread.h 00:07:07.882 TEST_HEADER include/spdk/trace.h 00:07:07.882 TEST_HEADER include/spdk/trace_parser.h 00:07:07.882 TEST_HEADER include/spdk/tree.h 00:07:07.882 TEST_HEADER include/spdk/ublk.h 00:07:07.882 TEST_HEADER include/spdk/util.h 00:07:07.882 TEST_HEADER include/spdk/uuid.h 00:07:07.882 TEST_HEADER include/spdk/version.h 00:07:07.882 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:07.882 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:07.882 TEST_HEADER include/spdk/vhost.h 00:07:07.882 TEST_HEADER include/spdk/vmd.h 00:07:07.882 TEST_HEADER include/spdk/xor.h 00:07:07.882 CC app/spdk_lspci/spdk_lspci.o 00:07:07.882 TEST_HEADER include/spdk/zipf.h 00:07:07.882 CXX test/cpp_headers/accel.o 00:07:07.882 CC test/dma/test_dma/test_dma.o 00:07:07.882 LINK bdevio 00:07:08.140 CXX test/cpp_headers/accel_module.o 00:07:08.140 LINK spdk_lspci 00:07:08.140 CXX test/cpp_headers/assert.o 00:07:08.140 CXX test/cpp_headers/barrier.o 00:07:08.400 CC app/spdk_nvme_perf/perf.o 00:07:08.400 LINK test_dma 00:07:08.400 CXX test/cpp_headers/base64.o 00:07:08.659 CC app/spdk_nvme_identify/identify.o 00:07:08.659 CXX test/cpp_headers/bdev.o 00:07:08.917 CXX test/cpp_headers/bdev_module.o 00:07:08.917 CC 
examples/nvme/hello_world/hello_world.o 00:07:09.175 CXX test/cpp_headers/bdev_zone.o 00:07:09.175 CXX test/cpp_headers/bit_array.o 00:07:09.175 LINK iscsi_fuzz 00:07:09.175 LINK spdk_nvme_perf 00:07:09.175 LINK hello_world 00:07:09.434 CXX test/cpp_headers/bit_pool.o 00:07:09.434 CC test/app/jsoncat/jsoncat.o 00:07:09.434 CC test/app/stub/stub.o 00:07:09.434 LINK jsoncat 00:07:09.434 CXX test/cpp_headers/blob.o 00:07:09.693 LINK spdk_nvme_identify 00:07:09.693 LINK stub 00:07:09.693 CXX test/cpp_headers/blob_bdev.o 00:07:09.950 CXX test/cpp_headers/blobfs.o 00:07:10.208 CC examples/nvme/reconnect/reconnect.o 00:07:10.208 CXX test/cpp_headers/blobfs_bdev.o 00:07:10.467 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:10.467 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:10.467 CXX test/cpp_headers/conf.o 00:07:10.467 CXX test/cpp_headers/config.o 00:07:10.467 LINK reconnect 00:07:10.467 CC app/spdk_nvme_discover/discovery_aer.o 00:07:10.725 CXX test/cpp_headers/cpuset.o 00:07:10.725 LINK spdk_nvme_discover 00:07:10.725 CC examples/sock/hello_world/hello_sock.o 00:07:10.725 CC examples/vmd/lsvmd/lsvmd.o 00:07:10.984 CXX test/cpp_headers/crc16.o 00:07:10.984 CC examples/vmd/led/led.o 00:07:10.984 LINK vhost_fuzz 00:07:10.984 CC app/spdk_top/spdk_top.o 00:07:10.984 CC test/env/mem_callbacks/mem_callbacks.o 00:07:11.241 LINK lsvmd 00:07:11.241 LINK led 00:07:11.241 CXX test/cpp_headers/crc32.o 00:07:11.241 LINK hello_sock 00:07:11.556 LINK mem_callbacks 00:07:11.556 CXX test/cpp_headers/crc64.o 00:07:11.556 CC test/env/vtophys/vtophys.o 00:07:11.556 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:11.833 LINK vtophys 00:07:11.833 CXX test/cpp_headers/dif.o 00:07:11.833 CC examples/nvme/arbitration/arbitration.o 00:07:11.833 CC app/vhost/vhost.o 00:07:12.090 CXX test/cpp_headers/dma.o 00:07:12.090 LINK spdk_top 00:07:12.090 CXX test/cpp_headers/endian.o 00:07:12.090 CC test/event/event_perf/event_perf.o 00:07:12.090 LINK vhost 00:07:12.348 LINK nvme_manage 00:07:12.348 LINK arbitration 00:07:12.348 CXX test/cpp_headers/env.o 00:07:12.348 LINK event_perf 00:07:12.348 CC app/spdk_dd/spdk_dd.o 00:07:12.348 CXX test/cpp_headers/env_dpdk.o 00:07:12.348 CXX test/cpp_headers/event.o 00:07:12.606 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:12.606 CC test/env/memory/memory_ut.o 00:07:12.606 CXX test/cpp_headers/fd.o 00:07:12.606 CXX test/cpp_headers/fd_group.o 00:07:12.606 CC test/lvol/esnap/esnap.o 00:07:12.864 LINK spdk_dd 00:07:12.864 LINK env_dpdk_post_init 00:07:12.864 CXX test/cpp_headers/file.o 00:07:13.122 CXX test/cpp_headers/ftl.o 00:07:13.122 CC test/event/reactor/reactor.o 00:07:13.122 CC app/fio/nvme/fio_plugin.o 00:07:13.122 LINK memory_ut 00:07:13.122 CXX test/cpp_headers/gpt_spec.o 00:07:13.379 LINK reactor 00:07:13.379 CXX test/cpp_headers/hexlify.o 00:07:13.379 CXX test/cpp_headers/histogram_data.o 00:07:13.379 CC test/event/reactor_perf/reactor_perf.o 00:07:13.379 CC examples/nvme/hotplug/hotplug.o 00:07:13.638 LINK reactor_perf 00:07:13.638 CXX test/cpp_headers/idxd.o 00:07:13.638 LINK hotplug 00:07:13.638 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:13.896 LINK spdk_nvme 00:07:13.896 CXX test/cpp_headers/idxd_spec.o 00:07:13.896 CXX test/cpp_headers/init.o 00:07:13.896 LINK cmb_copy 00:07:13.896 CC test/env/pci/pci_ut.o 00:07:14.153 CXX test/cpp_headers/ioat.o 00:07:14.154 CC test/event/app_repeat/app_repeat.o 00:07:14.154 CXX test/cpp_headers/ioat_spec.o 00:07:14.413 CC examples/nvme/abort/abort.o 00:07:14.413 LINK app_repeat 00:07:14.413 CXX 
test/cpp_headers/iscsi_spec.o 00:07:14.413 LINK pci_ut 00:07:14.671 CXX test/cpp_headers/json.o 00:07:14.671 LINK abort 00:07:14.929 CXX test/cpp_headers/jsonrpc.o 00:07:14.929 CXX test/cpp_headers/likely.o 00:07:14.929 CC app/fio/bdev/fio_plugin.o 00:07:14.929 CC test/rpc_client/rpc_client_test.o 00:07:14.929 CXX test/cpp_headers/log.o 00:07:15.188 CC test/event/scheduler/scheduler.o 00:07:15.188 CC test/nvme/aer/aer.o 00:07:15.188 CXX test/cpp_headers/lvol.o 00:07:15.188 LINK rpc_client_test 00:07:15.446 LINK scheduler 00:07:15.446 CXX test/cpp_headers/memory.o 00:07:15.446 CXX test/cpp_headers/mmio.o 00:07:15.446 CC test/thread/poller_perf/poller_perf.o 00:07:15.446 LINK spdk_bdev 00:07:15.446 LINK aer 00:07:15.705 CXX test/cpp_headers/nbd.o 00:07:15.705 CC test/thread/lock/spdk_lock.o 00:07:15.705 CXX test/cpp_headers/notify.o 00:07:15.705 LINK poller_perf 00:07:15.705 CXX test/cpp_headers/nvme.o 00:07:15.963 CC test/nvme/reset/reset.o 00:07:15.963 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:07:15.963 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:15.963 CXX test/cpp_headers/nvme_intel.o 00:07:16.222 LINK histogram_ut 00:07:16.222 LINK reset 00:07:16.222 LINK pmr_persistence 00:07:16.222 CXX test/cpp_headers/nvme_ocssd.o 00:07:16.480 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:16.480 CXX test/cpp_headers/nvme_spec.o 00:07:16.480 CC test/unit/lib/accel/accel.c/accel_ut.o 00:07:16.738 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:07:16.738 CXX test/cpp_headers/nvme_zns.o 00:07:16.738 CC test/unit/lib/bdev/part.c/part_ut.o 00:07:16.997 CXX test/cpp_headers/nvmf.o 00:07:17.255 CXX test/cpp_headers/nvmf_cmd.o 00:07:17.255 CC examples/nvmf/nvmf/nvmf.o 00:07:17.255 CC test/nvme/sgl/sgl.o 00:07:17.513 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:17.513 LINK spdk_lock 00:07:17.771 CXX test/cpp_headers/nvmf_spec.o 00:07:17.771 LINK nvmf 00:07:17.771 LINK sgl 00:07:18.030 CXX test/cpp_headers/nvmf_transport.o 00:07:18.030 CXX test/cpp_headers/opal.o 00:07:18.288 CC test/nvme/e2edp/nvme_dp.o 00:07:18.288 CXX test/cpp_headers/opal_spec.o 00:07:18.288 LINK esnap 00:07:18.545 CC test/nvme/overhead/overhead.o 00:07:18.545 CXX test/cpp_headers/pci_ids.o 00:07:18.545 LINK nvme_dp 00:07:18.803 CXX test/cpp_headers/pipe.o 00:07:18.803 CC test/nvme/err_injection/err_injection.o 00:07:18.803 LINK overhead 00:07:18.803 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:07:19.064 CXX test/cpp_headers/queue.o 00:07:19.064 CXX test/cpp_headers/reduce.o 00:07:19.064 LINK err_injection 00:07:19.064 LINK accel_ut 00:07:19.064 LINK scsi_nvme_ut 00:07:19.064 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:07:19.325 CXX test/cpp_headers/rpc.o 00:07:19.325 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:07:19.325 CXX test/cpp_headers/scheduler.o 00:07:19.582 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:07:19.582 CXX test/cpp_headers/scsi.o 00:07:19.582 LINK gpt_ut 00:07:19.839 CXX test/cpp_headers/scsi_spec.o 00:07:19.839 CC test/unit/lib/blob/blob.c/blob_ut.o 00:07:20.097 CXX test/cpp_headers/sock.o 00:07:20.097 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:07:20.097 CXX test/cpp_headers/stdinc.o 00:07:20.097 CC test/nvme/startup/startup.o 00:07:20.097 LINK blob_bdev_ut 00:07:20.097 CXX test/cpp_headers/string.o 00:07:20.097 LINK startup 00:07:20.354 CC test/nvme/reserve/reserve.o 00:07:20.354 LINK part_ut 00:07:20.354 CXX test/cpp_headers/thread.o 00:07:20.612 CXX test/cpp_headers/trace.o 00:07:20.612 LINK reserve 00:07:20.612 LINK vbdev_lvol_ut 00:07:20.612 CXX 
test/cpp_headers/trace_parser.o 00:07:20.910 CC test/nvme/simple_copy/simple_copy.o 00:07:20.910 CXX test/cpp_headers/tree.o 00:07:20.910 CXX test/cpp_headers/ublk.o 00:07:20.910 CXX test/cpp_headers/util.o 00:07:20.910 CC examples/util/zipf/zipf.o 00:07:21.167 CC examples/thread/thread/thread_ex.o 00:07:21.167 LINK simple_copy 00:07:21.167 LINK zipf 00:07:21.167 CXX test/cpp_headers/uuid.o 00:07:21.167 CXX test/cpp_headers/version.o 00:07:21.425 CC test/nvme/connect_stress/connect_stress.o 00:07:21.425 LINK thread 00:07:21.425 CC examples/idxd/perf/perf.o 00:07:21.425 CXX test/cpp_headers/vfio_user_pci.o 00:07:21.425 CXX test/cpp_headers/vfio_user_spec.o 00:07:21.425 LINK connect_stress 00:07:21.681 CXX test/cpp_headers/vhost.o 00:07:21.681 CC test/nvme/boot_partition/boot_partition.o 00:07:21.939 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:21.939 LINK idxd_perf 00:07:21.939 CXX test/cpp_headers/vmd.o 00:07:21.939 LINK boot_partition 00:07:22.196 LINK interrupt_tgt 00:07:22.196 CXX test/cpp_headers/xor.o 00:07:22.196 CXX test/cpp_headers/zipf.o 00:07:22.453 LINK bdev_ut 00:07:22.453 CC test/nvme/compliance/nvme_compliance.o 00:07:22.453 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:22.453 CC test/nvme/fused_ordering/fused_ordering.o 00:07:22.712 CC test/nvme/fdp/fdp.o 00:07:22.970 LINK fused_ordering 00:07:22.970 LINK doorbell_aers 00:07:22.970 LINK nvme_compliance 00:07:22.970 CC test/nvme/cuse/cuse.o 00:07:23.229 LINK fdp 00:07:23.796 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:07:24.055 LINK bdev_ut 00:07:24.055 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:07:24.055 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:07:24.055 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:07:24.055 LINK cuse 00:07:24.346 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:07:24.346 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:07:24.346 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:07:24.346 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:07:24.612 LINK bdev_raid_sb_ut 00:07:24.612 LINK raid1_ut 00:07:24.612 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:07:24.612 LINK tree_ut 00:07:24.612 LINK concat_ut 00:07:24.871 LINK bdev_zone_ut 00:07:24.871 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:07:24.871 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:07:25.129 CC test/unit/lib/dma/dma.c/dma_ut.o 00:07:25.129 LINK blobfs_bdev_ut 00:07:25.129 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:07:25.129 CC test/unit/lib/event/app.c/app_ut.o 00:07:25.129 LINK vbdev_zone_block_ut 00:07:25.388 LINK raid5f_ut 00:07:25.646 LINK dma_ut 00:07:25.646 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:07:25.646 LINK ioat_ut 00:07:25.646 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:07:25.906 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:07:25.906 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:07:25.906 LINK app_ut 00:07:25.906 LINK blobfs_async_ut 00:07:26.165 LINK bdev_raid_ut 00:07:26.165 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:07:26.165 LINK init_grp_ut 00:07:26.165 LINK blobfs_sync_ut 00:07:26.423 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:07:26.423 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:07:26.423 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:07:26.423 CC test/unit/lib/log/log.c/log_ut.o 00:07:26.689 LINK conn_ut 00:07:26.689 LINK jsonrpc_server_ut 00:07:26.689 LINK reactor_ut 00:07:26.946 CC test/unit/lib/notify/notify.c/notify_ut.o 00:07:26.946 LINK log_ut 00:07:27.204 CC 
test/unit/lib/iscsi/param.c/param_ut.o 00:07:27.204 LINK notify_ut 00:07:27.204 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:07:27.204 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:07:27.463 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:07:27.722 LINK param_ut 00:07:27.722 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:07:27.722 LINK blob_ut 00:07:27.981 LINK dev_ut 00:07:27.981 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:07:28.240 LINK scsi_ut 00:07:28.240 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:07:28.240 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:07:28.499 LINK lvol_ut 00:07:28.499 LINK lun_ut 00:07:28.758 CC test/unit/lib/sock/sock.c/sock_ut.o 00:07:29.017 LINK scsi_pr_ut 00:07:29.017 CC test/unit/lib/sock/posix.c/posix_ut.o 00:07:29.017 LINK nvme_ut 00:07:29.275 CC test/unit/lib/thread/thread.c/thread_ut.o 00:07:29.534 LINK iscsi_ut 00:07:29.534 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:07:29.792 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:07:29.792 LINK json_parse_ut 00:07:30.050 LINK posix_ut 00:07:30.050 LINK scsi_bdev_ut 00:07:30.050 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:07:30.308 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:07:30.567 LINK sock_ut 00:07:30.567 CC test/unit/lib/util/base64.c/base64_ut.o 00:07:30.567 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:07:30.825 LINK iobuf_ut 00:07:30.826 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:07:30.826 LINK base64_ut 00:07:30.826 LINK portal_grp_ut 00:07:31.084 LINK pci_event_ut 00:07:31.084 LINK json_util_ut 00:07:31.084 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:07:31.084 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:07:31.342 LINK bit_array_ut 00:07:31.342 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:07:31.342 LINK cpuset_ut 00:07:31.342 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:07:31.600 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:07:31.600 LINK tcp_ut 00:07:31.600 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:07:31.600 LINK bdev_nvme_ut 00:07:31.600 LINK subsystem_ut 00:07:31.600 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:07:31.858 LINK thread_ut 00:07:31.858 LINK crc16_ut 00:07:32.117 LINK rpc_ut 00:07:32.117 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:07:32.117 LINK tgt_node_ut 00:07:32.117 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:07:32.117 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:07:32.117 LINK idxd_user_ut 00:07:32.117 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:07:32.117 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:07:32.375 LINK json_write_ut 00:07:32.375 LINK crc32_ieee_ut 00:07:32.375 LINK crc32c_ut 00:07:32.375 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:07:32.375 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:07:32.375 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:07:32.634 CC test/unit/lib/rdma/common.c/common_ut.o 00:07:32.634 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:07:32.634 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:07:32.634 LINK crc64_ut 00:07:32.893 LINK ftl_l2p_ut 00:07:32.893 LINK ftl_bitmap_ut 00:07:32.893 LINK idxd_ut 00:07:32.893 CC test/unit/lib/util/dif.c/dif_ut.o 00:07:33.151 LINK common_ut 00:07:33.151 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:07:33.151 LINK ftl_io_ut 00:07:33.151 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:07:33.409 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:07:33.409 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:07:33.409 LINK nvme_ctrlr_ut 00:07:33.409 LINK ftl_mempool_ut 00:07:33.667 CC test/unit/lib/util/iov.c/iov_ut.o 00:07:33.667 LINK 
ftl_band_ut 00:07:33.667 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:07:33.925 LINK ftl_mngt_ut 00:07:33.925 CC test/unit/lib/util/math.c/math_ut.o 00:07:33.925 LINK iov_ut 00:07:34.184 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:07:34.184 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:07:34.184 LINK math_ut 00:07:34.184 LINK dif_ut 00:07:34.184 LINK vhost_ut 00:07:34.184 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:07:34.442 CC test/unit/lib/util/string.c/string_ut.o 00:07:34.442 CC test/unit/lib/util/xor.c/xor_ut.o 00:07:34.699 LINK string_ut 00:07:34.700 LINK ftl_layout_upgrade_ut 00:07:34.700 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:07:34.700 LINK pipe_ut 00:07:34.700 LINK ftl_sb_ut 00:07:34.958 LINK xor_ut 00:07:34.958 LINK nvme_ns_ut 00:07:34.958 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:07:34.958 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:07:35.217 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:07:35.217 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:07:35.217 LINK nvme_ctrlr_ocssd_cmd_ut 00:07:35.217 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:07:35.217 LINK nvme_ctrlr_cmd_ut 00:07:35.217 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:07:35.476 LINK ctrlr_ut 00:07:35.476 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:07:35.734 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:07:35.734 LINK nvme_quirks_ut 00:07:35.993 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:07:35.993 LINK nvme_poll_group_ut 00:07:35.993 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:07:36.251 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:07:36.251 LINK nvme_qpair_ut 00:07:36.251 LINK nvme_io_msg_ut 00:07:36.510 LINK nvme_transport_ut 00:07:36.510 LINK nvme_ns_ocssd_cmd_ut 00:07:36.510 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:07:36.768 LINK nvme_ns_cmd_ut 00:07:36.768 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:07:36.768 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:07:36.768 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:07:36.768 LINK nvme_pcie_ut 00:07:37.026 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:07:37.026 LINK nvme_fabric_ut 00:07:37.284 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:07:37.284 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:07:37.542 LINK nvme_opal_ut 00:07:37.542 LINK nvme_pcie_common_ut 00:07:37.800 LINK nvme_tcp_ut 00:07:37.800 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:07:38.060 LINK ctrlr_bdev_ut 00:07:38.319 LINK nvme_cuse_ut 00:07:38.319 LINK subsystem_ut 00:07:38.578 LINK nvmf_ut 00:07:38.578 LINK nvme_rdma_ut 00:07:38.836 LINK ctrlr_discovery_ut 00:07:41.378 LINK transport_ut 00:07:41.378 LINK rdma_ut 00:07:41.378 00:07:41.378 real 1m43.661s 00:07:41.378 user 8m52.428s 00:07:41.379 sys 1m33.480s 00:07:41.379 11:51:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:07:41.379 11:51:46 -- common/autotest_common.sh@10 -- $ set +x 00:07:41.379 ************************************ 00:07:41.379 END TEST unittest_build 00:07:41.379 ************************************ 00:07:41.379 11:51:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:41.379 11:51:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:41.379 11:51:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:41.379 11:51:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:41.379 11:51:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 
00:07:41.379 11:51:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:41.379 11:51:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:41.379 11:51:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:41.379 11:51:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:41.379 11:51:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.379 11:51:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:41.379 11:51:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:41.379 11:51:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:41.379 11:51:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:41.379 11:51:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:41.379 11:51:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:41.379 11:51:46 -- scripts/common.sh@344 -- # : 1 00:07:41.379 11:51:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:41.379 11:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.379 11:51:46 -- scripts/common.sh@364 -- # decimal 1 00:07:41.379 11:51:46 -- scripts/common.sh@352 -- # local d=1 00:07:41.379 11:51:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.379 11:51:46 -- scripts/common.sh@354 -- # echo 1 00:07:41.637 11:51:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:41.637 11:51:46 -- scripts/common.sh@365 -- # decimal 2 00:07:41.637 11:51:46 -- scripts/common.sh@352 -- # local d=2 00:07:41.637 11:51:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.637 11:51:46 -- scripts/common.sh@354 -- # echo 2 00:07:41.637 11:51:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:41.637 11:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:41.637 11:51:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:41.637 11:51:46 -- scripts/common.sh@367 -- # return 0 00:07:41.637 11:51:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.637 11:51:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.637 --rc genhtml_branch_coverage=1 00:07:41.637 --rc genhtml_function_coverage=1 00:07:41.637 --rc genhtml_legend=1 00:07:41.637 --rc geninfo_all_blocks=1 00:07:41.637 --rc geninfo_unexecuted_blocks=1 00:07:41.637 00:07:41.637 ' 00:07:41.637 11:51:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.637 --rc genhtml_branch_coverage=1 00:07:41.637 --rc genhtml_function_coverage=1 00:07:41.637 --rc genhtml_legend=1 00:07:41.637 --rc geninfo_all_blocks=1 00:07:41.637 --rc geninfo_unexecuted_blocks=1 00:07:41.637 00:07:41.637 ' 00:07:41.637 11:51:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.637 --rc genhtml_branch_coverage=1 00:07:41.637 --rc genhtml_function_coverage=1 00:07:41.637 --rc genhtml_legend=1 00:07:41.637 --rc geninfo_all_blocks=1 00:07:41.637 --rc geninfo_unexecuted_blocks=1 00:07:41.637 00:07:41.637 ' 00:07:41.637 11:51:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.637 --rc genhtml_branch_coverage=1 00:07:41.637 --rc genhtml_function_coverage=1 00:07:41.637 --rc genhtml_legend=1 00:07:41.637 --rc geninfo_all_blocks=1 00:07:41.637 --rc geninfo_unexecuted_blocks=1 00:07:41.637 00:07:41.637 ' 00:07:41.637 11:51:46 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:41.637 11:51:46 -- nvmf/common.sh@7 -- # uname -s 00:07:41.637 11:51:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.637 11:51:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.637 11:51:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.637 11:51:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.638 11:51:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.638 11:51:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.638 11:51:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.638 11:51:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.638 11:51:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.638 11:51:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.638 11:51:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684f0e64-c129-4537-b465-1a4cc818ee04 00:07:41.638 11:51:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=684f0e64-c129-4537-b465-1a4cc818ee04 00:07:41.638 11:51:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.638 11:51:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.638 11:51:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:41.638 11:51:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.638 11:51:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.638 11:51:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.638 11:51:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.638 11:51:46 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.638 11:51:46 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.638 11:51:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.638 11:51:46 -- paths/export.sh@5 -- # export PATH 00:07:41.638 11:51:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.638 11:51:46 -- nvmf/common.sh@46 -- # : 0 00:07:41.638 11:51:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:41.638 11:51:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:41.638 11:51:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:41.638 11:51:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.638 11:51:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.638 11:51:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:41.638 11:51:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:41.638 11:51:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 
00:07:41.638 11:51:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:41.638 11:51:46 -- spdk/autotest.sh@32 -- # uname -s 00:07:41.638 11:51:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:41.638 11:51:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:07:41.638 11:51:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:41.638 11:51:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:41.638 11:51:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:41.638 11:51:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:41.638 11:51:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:41.638 11:51:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:07:41.638 11:51:46 -- spdk/autotest.sh@48 -- # udevadm_pid=104424 00:07:41.638 11:51:46 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:07:41.638 11:51:46 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:07:41.638 11:51:46 -- spdk/autotest.sh@54 -- # echo 104433 00:07:41.638 11:51:46 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:41.638 11:51:46 -- spdk/autotest.sh@56 -- # echo 104436 00:07:41.638 11:51:46 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:41.638 11:51:46 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:07:41.638 11:51:46 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:41.638 11:51:46 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:07:41.638 11:51:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.638 11:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.638 11:51:46 -- spdk/autotest.sh@70 -- # create_test_list 00:07:41.638 11:51:46 -- common/autotest_common.sh@746 -- # xtrace_disable 00:07:41.638 11:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.638 11:51:47 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:41.638 11:51:47 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:41.638 11:51:47 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:07:41.638 11:51:47 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:41.638 11:51:47 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:07:41.638 11:51:47 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:07:41.638 11:51:47 -- common/autotest_common.sh@1450 -- # uname 00:07:41.638 11:51:47 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:07:41.638 11:51:47 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:07:41.638 11:51:47 -- common/autotest_common.sh@1470 -- # uname 00:07:41.638 11:51:47 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:07:41.638 11:51:47 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:07:41.638 11:51:47 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:41.638 lcov: LCOV version 1.15 00:07:41.638 11:51:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:59.720 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:59.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:59.720 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:59.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:59.720 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:59.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:08:31.811 11:52:36 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:08:31.811 11:52:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.811 11:52:36 -- common/autotest_common.sh@10 -- # set +x 00:08:31.811 11:52:36 -- spdk/autotest.sh@89 -- # rm -f 00:08:31.811 11:52:36 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:31.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:31.811 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:08:31.811 11:52:36 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:08:31.811 11:52:36 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:31.811 11:52:36 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:31.811 11:52:36 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:31.811 11:52:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:31.811 11:52:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:31.811 11:52:36 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:31.811 11:52:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:31.811 11:52:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:31.811 11:52:36 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:08:31.811 11:52:36 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:08:31.811 11:52:36 -- spdk/autotest.sh@108 -- # grep -v p 00:08:31.811 11:52:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:31.811 11:52:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:31.811 11:52:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:08:31.811 11:52:36 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:08:31.811 11:52:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:31.811 No valid GPT data, bailing 00:08:31.811 11:52:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:31.811 11:52:36 -- scripts/common.sh@393 -- # pt= 00:08:31.811 11:52:36 -- scripts/common.sh@394 -- # return 1 00:08:31.811 11:52:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:31.811 1+0 records in 00:08:31.811 1+0 records out 00:08:31.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405224 s, 259 MB/s 00:08:31.811 11:52:36 -- spdk/autotest.sh@116 -- # sync 00:08:31.811 11:52:36 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:31.811 11:52:36 -- common/autotest_common.sh@22 
-- # eval 'reap_spdk_processes 12> /dev/null' 00:08:31.812 11:52:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:32.749 11:52:37 -- spdk/autotest.sh@122 -- # uname -s 00:08:32.749 11:52:37 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:08:32.749 11:52:37 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:32.749 11:52:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.749 11:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.749 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.749 ************************************ 00:08:32.749 START TEST setup.sh 00:08:32.749 ************************************ 00:08:32.749 11:52:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:32.749 * Looking for test storage... 00:08:32.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:32.749 11:52:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:32.749 11:52:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:32.749 11:52:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:32.749 11:52:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:32.749 11:52:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:32.749 11:52:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:32.749 11:52:38 -- scripts/common.sh@335 -- # IFS=.-: 00:08:32.749 11:52:38 -- scripts/common.sh@335 -- # read -ra ver1 00:08:32.749 11:52:38 -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.749 11:52:38 -- scripts/common.sh@336 -- # read -ra ver2 00:08:32.749 11:52:38 -- scripts/common.sh@337 -- # local 'op=<' 00:08:32.749 11:52:38 -- scripts/common.sh@339 -- # ver1_l=2 00:08:32.749 11:52:38 -- scripts/common.sh@340 -- # ver2_l=1 00:08:32.749 11:52:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:32.749 11:52:38 -- scripts/common.sh@343 -- # case "$op" in 00:08:32.749 11:52:38 -- scripts/common.sh@344 -- # : 1 00:08:32.749 11:52:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:32.749 11:52:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.749 11:52:38 -- scripts/common.sh@364 -- # decimal 1 00:08:32.749 11:52:38 -- scripts/common.sh@352 -- # local d=1 00:08:32.749 11:52:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.749 11:52:38 -- scripts/common.sh@354 -- # echo 1 00:08:32.749 11:52:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:32.749 11:52:38 -- scripts/common.sh@365 -- # decimal 2 00:08:32.749 11:52:38 -- scripts/common.sh@352 -- # local d=2 00:08:32.749 11:52:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.749 11:52:38 -- scripts/common.sh@354 -- # echo 2 00:08:32.749 11:52:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:32.749 11:52:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:32.749 11:52:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:32.749 11:52:38 -- scripts/common.sh@367 -- # return 0 00:08:32.749 11:52:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.749 --rc genhtml_branch_coverage=1 00:08:32.749 --rc genhtml_function_coverage=1 00:08:32.749 --rc genhtml_legend=1 00:08:32.749 --rc geninfo_all_blocks=1 00:08:32.749 --rc geninfo_unexecuted_blocks=1 00:08:32.749 00:08:32.749 ' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.749 --rc genhtml_branch_coverage=1 00:08:32.749 --rc genhtml_function_coverage=1 00:08:32.749 --rc genhtml_legend=1 00:08:32.749 --rc geninfo_all_blocks=1 00:08:32.749 --rc geninfo_unexecuted_blocks=1 00:08:32.749 00:08:32.749 ' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.749 --rc genhtml_branch_coverage=1 00:08:32.749 --rc genhtml_function_coverage=1 00:08:32.749 --rc genhtml_legend=1 00:08:32.749 --rc geninfo_all_blocks=1 00:08:32.749 --rc geninfo_unexecuted_blocks=1 00:08:32.749 00:08:32.749 ' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.749 --rc genhtml_branch_coverage=1 00:08:32.749 --rc genhtml_function_coverage=1 00:08:32.749 --rc genhtml_legend=1 00:08:32.749 --rc geninfo_all_blocks=1 00:08:32.749 --rc geninfo_unexecuted_blocks=1 00:08:32.749 00:08:32.749 ' 00:08:32.749 11:52:38 -- setup/test-setup.sh@10 -- # uname -s 00:08:32.749 11:52:38 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:32.749 11:52:38 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:32.749 11:52:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.749 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:08:32.749 ************************************ 00:08:32.749 START TEST acl 00:08:32.749 ************************************ 00:08:32.749 11:52:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:32.749 * Looking for test storage... 
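
Each sub-suite in this log (setup.sh just above, acl starting here, hugepages further down) is launched through the run_test helper from autotest_common.sh, which prints the "START TEST ... / END TEST ..." banners and the real/user/sys timings that punctuate the output. A hypothetical minimal wrapper with the same observable behaviour is sketched below; it is only an illustration inferred from the log, and the real helper also manages xtrace state and failure bookkeeping.

    # Hypothetical sketch of a run_test-style wrapper; not the SPDK source.
    run_test() {
        if [ $# -le 1 ]; then                 # need a test name plus a command
            echo "run_test: usage: run_test <name> <cmd> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        time "$@"                             # emits the real/user/sys lines
        local rc=$?

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # e.g. run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
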
00:08:32.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:32.749 11:52:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:32.749 11:52:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:32.749 11:52:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:33.007 11:52:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:33.007 11:52:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:33.007 11:52:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:33.007 11:52:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:33.007 11:52:38 -- scripts/common.sh@335 -- # IFS=.-: 00:08:33.007 11:52:38 -- scripts/common.sh@335 -- # read -ra ver1 00:08:33.007 11:52:38 -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.007 11:52:38 -- scripts/common.sh@336 -- # read -ra ver2 00:08:33.007 11:52:38 -- scripts/common.sh@337 -- # local 'op=<' 00:08:33.007 11:52:38 -- scripts/common.sh@339 -- # ver1_l=2 00:08:33.007 11:52:38 -- scripts/common.sh@340 -- # ver2_l=1 00:08:33.007 11:52:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:33.007 11:52:38 -- scripts/common.sh@343 -- # case "$op" in 00:08:33.007 11:52:38 -- scripts/common.sh@344 -- # : 1 00:08:33.007 11:52:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:33.007 11:52:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.007 11:52:38 -- scripts/common.sh@364 -- # decimal 1 00:08:33.007 11:52:38 -- scripts/common.sh@352 -- # local d=1 00:08:33.007 11:52:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.007 11:52:38 -- scripts/common.sh@354 -- # echo 1 00:08:33.007 11:52:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:33.007 11:52:38 -- scripts/common.sh@365 -- # decimal 2 00:08:33.007 11:52:38 -- scripts/common.sh@352 -- # local d=2 00:08:33.007 11:52:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.007 11:52:38 -- scripts/common.sh@354 -- # echo 2 00:08:33.007 11:52:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:33.007 11:52:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:33.007 11:52:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:33.007 11:52:38 -- scripts/common.sh@367 -- # return 0 00:08:33.008 11:52:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.008 11:52:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:33.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.008 --rc genhtml_branch_coverage=1 00:08:33.008 --rc genhtml_function_coverage=1 00:08:33.008 --rc genhtml_legend=1 00:08:33.008 --rc geninfo_all_blocks=1 00:08:33.008 --rc geninfo_unexecuted_blocks=1 00:08:33.008 00:08:33.008 ' 00:08:33.008 11:52:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:33.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.008 --rc genhtml_branch_coverage=1 00:08:33.008 --rc genhtml_function_coverage=1 00:08:33.008 --rc genhtml_legend=1 00:08:33.008 --rc geninfo_all_blocks=1 00:08:33.008 --rc geninfo_unexecuted_blocks=1 00:08:33.008 00:08:33.008 ' 00:08:33.008 11:52:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:33.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.008 --rc genhtml_branch_coverage=1 00:08:33.008 --rc genhtml_function_coverage=1 00:08:33.008 --rc genhtml_legend=1 00:08:33.008 --rc geninfo_all_blocks=1 00:08:33.008 --rc geninfo_unexecuted_blocks=1 00:08:33.008 00:08:33.008 ' 00:08:33.008 11:52:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:33.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.008 --rc genhtml_branch_coverage=1 00:08:33.008 --rc genhtml_function_coverage=1 00:08:33.008 --rc genhtml_legend=1 00:08:33.008 --rc geninfo_all_blocks=1 00:08:33.008 --rc geninfo_unexecuted_blocks=1 00:08:33.008 00:08:33.008 ' 00:08:33.008 11:52:38 -- setup/acl.sh@10 -- # get_zoned_devs 00:08:33.008 11:52:38 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:33.008 11:52:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:33.008 11:52:38 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:33.008 11:52:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.008 11:52:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:33.008 11:52:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:33.008 11:52:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:33.008 11:52:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.008 11:52:38 -- setup/acl.sh@12 -- # devs=() 00:08:33.008 11:52:38 -- setup/acl.sh@12 -- # declare -a devs 00:08:33.008 11:52:38 -- setup/acl.sh@13 -- # drivers=() 00:08:33.008 11:52:38 -- setup/acl.sh@13 -- # declare -A drivers 00:08:33.008 11:52:38 -- setup/acl.sh@51 -- # setup reset 00:08:33.008 11:52:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:33.008 11:52:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:33.590 11:52:38 -- setup/acl.sh@52 -- # collect_setup_devs 00:08:33.590 11:52:38 -- setup/acl.sh@16 -- # local dev driver 00:08:33.590 11:52:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:33.590 11:52:38 -- setup/acl.sh@15 -- # setup output status 00:08:33.590 11:52:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:33.590 11:52:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:33.590 Hugepages 00:08:33.590 node hugesize free / total 00:08:33.590 11:52:38 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:33.590 11:52:38 -- setup/acl.sh@19 -- # continue 00:08:33.590 11:52:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:33.590 00:08:33.590 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:33.590 11:52:38 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:33.590 11:52:38 -- setup/acl.sh@19 -- # continue 00:08:33.590 11:52:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:33.590 11:52:39 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:33.590 11:52:39 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:33.590 11:52:39 -- setup/acl.sh@20 -- # continue 00:08:33.590 11:52:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:33.849 11:52:39 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:08:33.849 11:52:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:33.849 11:52:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:33.849 11:52:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:33.849 11:52:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:33.849 11:52:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:33.849 11:52:39 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:08:33.849 11:52:39 -- setup/acl.sh@54 -- # run_test denied denied 00:08:33.849 11:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.849 11:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.849 11:52:39 -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.849 ************************************ 00:08:33.849 START TEST denied 00:08:33.849 ************************************ 00:08:33.849 11:52:39 -- common/autotest_common.sh@1114 -- # denied 00:08:33.849 11:52:39 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:08:33.849 11:52:39 -- setup/acl.sh@38 -- # setup output config 00:08:33.849 11:52:39 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:08:33.849 11:52:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:33.849 11:52:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:35.228 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:08:35.228 11:52:40 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:08:35.228 11:52:40 -- setup/acl.sh@28 -- # local dev driver 00:08:35.228 11:52:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:35.228 11:52:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:08:35.228 11:52:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:08:35.228 11:52:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:35.228 11:52:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:35.228 11:52:40 -- setup/acl.sh@41 -- # setup reset 00:08:35.228 11:52:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:35.228 11:52:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:35.796 00:08:35.796 real 0m1.928s 00:08:35.796 user 0m0.462s 00:08:35.796 sys 0m1.507s 00:08:35.796 11:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.796 11:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.796 ************************************ 00:08:35.796 END TEST denied 00:08:35.796 ************************************ 00:08:35.796 11:52:41 -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:35.796 11:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.796 11:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.796 11:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.796 ************************************ 00:08:35.796 START TEST allowed 00:08:35.796 ************************************ 00:08:35.796 11:52:41 -- common/autotest_common.sh@1114 -- # allowed 00:08:35.796 11:52:41 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:08:35.796 11:52:41 -- setup/acl.sh@45 -- # setup output config 00:08:35.796 11:52:41 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:08:35.796 11:52:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:35.796 11:52:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:37.698 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.698 11:52:42 -- setup/acl.sh@47 -- # verify 00:08:37.698 11:52:42 -- setup/acl.sh@28 -- # local dev driver 00:08:37.699 11:52:42 -- setup/acl.sh@48 -- # setup reset 00:08:37.699 11:52:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:37.699 11:52:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:37.957 00:08:37.957 real 0m2.111s 00:08:37.957 user 0m0.447s 00:08:37.957 sys 0m1.622s 00:08:37.957 11:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.957 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.957 ************************************ 00:08:37.957 END TEST allowed 00:08:37.957 ************************************ 00:08:37.957 00:08:37.957 real 0m5.072s 
00:08:37.957 user 0m1.592s 00:08:37.957 sys 0m3.535s 00:08:37.957 11:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.957 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.957 ************************************ 00:08:37.957 END TEST acl 00:08:37.957 ************************************ 00:08:37.957 11:52:43 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:37.957 11:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.957 11:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.957 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.957 ************************************ 00:08:37.957 START TEST hugepages 00:08:37.957 ************************************ 00:08:37.957 11:52:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:37.957 * Looking for test storage... 00:08:37.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:37.957 11:52:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.957 11:52:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.957 11:52:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.957 11:52:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.957 11:52:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.957 11:52:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.957 11:52:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.957 11:52:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.957 11:52:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.957 11:52:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.958 11:52:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.958 11:52:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.958 11:52:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.958 11:52:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.958 11:52:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.958 11:52:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.958 11:52:43 -- scripts/common.sh@344 -- # : 1 00:08:37.958 11:52:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.958 11:52:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.958 11:52:43 -- scripts/common.sh@364 -- # decimal 1 00:08:38.217 11:52:43 -- scripts/common.sh@352 -- # local d=1 00:08:38.217 11:52:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.217 11:52:43 -- scripts/common.sh@354 -- # echo 1 00:08:38.217 11:52:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.217 11:52:43 -- scripts/common.sh@365 -- # decimal 2 00:08:38.217 11:52:43 -- scripts/common.sh@352 -- # local d=2 00:08:38.217 11:52:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.217 11:52:43 -- scripts/common.sh@354 -- # echo 2 00:08:38.217 11:52:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.217 11:52:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.217 11:52:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.217 11:52:43 -- scripts/common.sh@367 -- # return 0 00:08:38.217 11:52:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.217 11:52:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.217 --rc genhtml_branch_coverage=1 00:08:38.217 --rc genhtml_function_coverage=1 00:08:38.217 --rc genhtml_legend=1 00:08:38.217 --rc geninfo_all_blocks=1 00:08:38.217 --rc geninfo_unexecuted_blocks=1 00:08:38.217 00:08:38.217 ' 00:08:38.217 11:52:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.217 --rc genhtml_branch_coverage=1 00:08:38.217 --rc genhtml_function_coverage=1 00:08:38.217 --rc genhtml_legend=1 00:08:38.217 --rc geninfo_all_blocks=1 00:08:38.217 --rc geninfo_unexecuted_blocks=1 00:08:38.217 00:08:38.217 ' 00:08:38.217 11:52:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.217 --rc genhtml_branch_coverage=1 00:08:38.217 --rc genhtml_function_coverage=1 00:08:38.217 --rc genhtml_legend=1 00:08:38.217 --rc geninfo_all_blocks=1 00:08:38.217 --rc geninfo_unexecuted_blocks=1 00:08:38.217 00:08:38.217 ' 00:08:38.217 11:52:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.217 --rc genhtml_branch_coverage=1 00:08:38.217 --rc genhtml_function_coverage=1 00:08:38.217 --rc genhtml_legend=1 00:08:38.217 --rc geninfo_all_blocks=1 00:08:38.217 --rc geninfo_unexecuted_blocks=1 00:08:38.217 00:08:38.217 ' 00:08:38.217 11:52:43 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:38.217 11:52:43 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:38.217 11:52:43 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:38.217 11:52:43 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:38.217 11:52:43 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:38.217 11:52:43 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:38.217 11:52:43 -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:38.217 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:38.217 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:38.217 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.217 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.217 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:38.217 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:38.217 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.217 
11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 1442640 kB' 'MemAvailable: 7371944 kB' 'Buffers: 40196 kB' 'Cached: 5969748 kB' 'SwapCached: 0 kB' 'Active: 1591436 kB' 'Inactive: 4553060 kB' 'Active(anon): 1092 kB' 'Inactive(anon): 145132 kB' 'Active(file): 1590344 kB' 'Inactive(file): 4407928 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 416 kB' 'Writeback: 0 kB' 'AnonPages: 163820 kB' 'Mapped: 73640 kB' 'Shmem: 2608 kB' 'KReclaimable: 252548 kB' 'Slab: 323668 kB' 'SReclaimable: 252548 kB' 'SUnreclaim: 71120 kB' 'KernelStack: 4576 kB' 'PageTables: 4020 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024324 kB' 'Committed_AS: 535792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19588 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.217 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.217 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 
11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- 
setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.218 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.218 11:52:43 -- 
setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:38.218 11:52:43 -- setup/common.sh@33 -- # echo 2048 00:08:38.218 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:38.218 11:52:43 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:38.218 11:52:43 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:38.218 11:52:43 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:38.218 11:52:43 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:38.218 11:52:43 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:38.218 11:52:43 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:38.218 11:52:43 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:38.219 11:52:43 -- setup/hugepages.sh@207 -- # get_nodes 00:08:38.219 11:52:43 -- setup/hugepages.sh@27 -- # local node 00:08:38.219 11:52:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:38.219 11:52:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:38.219 11:52:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:38.219 11:52:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:38.219 11:52:43 -- setup/hugepages.sh@208 -- # clear_hp 00:08:38.219 11:52:43 -- setup/hugepages.sh@37 -- # local node hp 00:08:38.219 11:52:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:38.219 11:52:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:38.219 11:52:43 -- setup/hugepages.sh@41 -- # echo 0 00:08:38.219 11:52:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:38.219 11:52:43 -- setup/hugepages.sh@41 -- # echo 0 00:08:38.219 11:52:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:38.219 11:52:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:38.219 11:52:43 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:38.219 11:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.219 11:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.219 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.219 ************************************ 00:08:38.219 START TEST default_setup 00:08:38.219 ************************************ 00:08:38.219 11:52:43 -- common/autotest_common.sh@1114 -- # default_setup 00:08:38.219 11:52:43 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:38.219 11:52:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:38.219 11:52:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:38.219 11:52:43 -- setup/hugepages.sh@51 -- # shift 00:08:38.219 11:52:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:38.219 11:52:43 -- setup/hugepages.sh@52 -- # local node_ids 00:08:38.219 11:52:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:38.219 11:52:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:38.219 11:52:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:38.219 11:52:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:38.219 11:52:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:38.219 11:52:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:38.219 11:52:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:38.219 11:52:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:38.219 11:52:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:38.219 11:52:43 -- 
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:38.219 11:52:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:38.219 11:52:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:38.219 11:52:43 -- setup/hugepages.sh@73 -- # return 0 00:08:38.219 11:52:43 -- setup/hugepages.sh@137 -- # setup output 00:08:38.219 11:52:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.219 11:52:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:38.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:38.801 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.384 11:52:44 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:39.384 11:52:44 -- setup/hugepages.sh@89 -- # local node 00:08:39.384 11:52:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:39.384 11:52:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:39.384 11:52:44 -- setup/hugepages.sh@92 -- # local surp 00:08:39.384 11:52:44 -- setup/hugepages.sh@93 -- # local resv 00:08:39.384 11:52:44 -- setup/hugepages.sh@94 -- # local anon 00:08:39.384 11:52:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:39.384 11:52:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:39.384 11:52:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:39.384 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:39.385 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:39.385 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.385 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.385 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.385 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.385 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.385 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.385 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.385 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3540688 kB' 'MemAvailable: 9469948 kB' 'Buffers: 40196 kB' 'Cached: 5969752 kB' 'SwapCached: 0 kB' 'Active: 1591464 kB' 'Inactive: 4554576 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 146672 kB' 'Active(file): 1590376 kB' 'Inactive(file): 4407904 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 416 kB' 'Writeback: 0 kB' 'AnonPages: 165352 kB' 'Mapped: 73432 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323744 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71248 kB' 'KernelStack: 4480 kB' 'PageTables: 3804 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:39.385 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.385 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.385 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.385 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.385 11:52:44 -- setup/common.sh@31 -- # read -r var val 
_ 00:08:39.385 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:08:39.385 11:52:44 -- setup/common.sh@32 -- # continue
[... repetitive xtrace condensed: every remaining /proc/meminfo field (MemAvailable through HardwareCorrupted) is tested against AnonHugePages and skipped with "continue" ...]
00:08:39.386 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:08:39.386 11:52:44 -- setup/common.sh@33 -- # echo 0
00:08:39.386 11:52:44 -- setup/common.sh@33 -- # return 0
00:08:39.386 11:52:44 -- setup/hugepages.sh@97 -- # anon=0
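A note for readers of this trace: every block of "[[ <field> == ... ]] / continue" lines above comes from a helper that simply scans a meminfo file for one field. The function below is an illustrative, stand-alone sketch of that idea (the name and structure are mine, not the setup/common.sh source) and can be run on any Linux host.

# Illustrative sketch only: scan /proc/meminfo (or one node's meminfo) for a single
# field and print its value, mimicking what the traced get_meminfo loop does.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # when a node id is given, read that NUMA node's counters from sysfs instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <id> "; strip it so the keys
    # line up with /proc/meminfo (single-digit node ids only in this sketch)
    mem=("${mem[@]#Node [0-9] }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}
# Example usage (values differ per machine):
#   get_meminfo_sketch HugePages_Total      # 1024 on the VM in this log
#   get_meminfo_sketch HugePages_Free 0     # free hugepages on NUMA node 0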
00:08:39.386 11:52:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:08:39.386 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:08:39.386 11:52:44 -- setup/common.sh@18 -- # local node=
00:08:39.386 11:52:44 -- setup/common.sh@19 -- # local var val
00:08:39.386 11:52:44 -- setup/common.sh@20 -- # local mem_f mem
00:08:39.386 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:39.386 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:39.386 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:39.386 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem
00:08:39.386 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:39.386 11:52:44 -- setup/common.sh@31 -- # IFS=': '
00:08:39.386 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3540688 kB' 'MemAvailable: 9469948 kB' 'Buffers: 40196 kB' 'Cached: 5969752 kB' 'SwapCached: 0 kB' 'Active: 1591464 kB' 'Inactive: 4554796 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 146892 kB' 'Active(file): 1590376 kB' 'Inactive(file): 4407904 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 416 kB' 'Writeback: 0 kB' 'AnonPages: 165572 kB' 'Mapped: 73432 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323744 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71248 kB' 'KernelStack: 4464 kB' 'PageTables: 3768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB'
00:08:39.386 11:52:44 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: every /proc/meminfo field from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped with "continue" ...]
00:08:39.387 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:39.387 11:52:44 -- setup/common.sh@33 -- # echo 0
00:08:39.387 11:52:44 -- setup/common.sh@33 -- # return 0
00:08:39.387 11:52:44 -- setup/hugepages.sh@99 -- # surp=0
00:08:39.387 11:52:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:08:39.387 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:08:39.387 11:52:44 -- setup/common.sh@18 -- # local node=
00:08:39.387 11:52:44 -- setup/common.sh@19 -- # local var val
00:08:39.387 11:52:44 -- setup/common.sh@20 -- # local mem_f mem
00:08:39.387 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:39.387 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:39.387 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:39.387 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem
00:08:39.387 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:39.387 11:52:44 -- setup/common.sh@31 -- # IFS=': '
00:08:39.387 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3540944 kB' 'MemAvailable: 9470204 kB' 'Buffers: 40196 kB' 'Cached: 5969752 kB' 'SwapCached: 0 kB' 'Active: 1591464 kB' 'Inactive: 4555056 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 147152 kB' 'Active(file): 1590376 kB' 'Inactive(file): 4407904 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 416 kB' 'Writeback: 0 kB' 'AnonPages: 165572 kB' 'Mapped: 73432 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323744 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71248 kB' 'KernelStack: 4464 kB' 'PageTables: 3768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19572 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB'
00:08:39.387 11:52:44 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: every /proc/meminfo field is tested against HugePages_Rsvd and skipped with "continue" until the HugePages_Rsvd line is reached ...]
00:08:39.388 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:08:39.388 11:52:44 -- setup/common.sh@33 -- # echo 0
00:08:39.388 11:52:44 -- setup/common.sh@33 -- # return 0
00:08:39.388 11:52:44 -- setup/hugepages.sh@100 -- # resv=0
00:08:39.388 11:52:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:08:39.388 11:52:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:08:39.388 11:52:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:08:39.388 11:52:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:08:39.388 nr_hugepages=1024
00:08:39.388 resv_hugepages=0
00:08:39.388 surplus_hugepages=0
00:08:39.388 anon_hugepages=0
00:08:39.388 11:52:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:08:39.388 11:52:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
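The values echoed just above (nr_hugepages=1024 with zero reserved, surplus and anonymous hugepages) feed a consistency check of the form HugePages_Total == requested + surplus + reserved. Below is a stand-alone, hedged sketch of that accounting; the expected count of 1024 is taken from this run, and the awk one-liners are simply another way to read the same /proc/meminfo fields the trace walks through.

# Illustrative only, not the hugepages.sh source: the kernel's total hugepage pool
# should equal the requested pages plus any surplus and reserved pages.
expected=1024   # pool size this run asked for (see nr_hugepages=1024 above)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
if (( total == expected + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage pool mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
fi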
00:08:39.388 11:52:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:08:39.388 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:08:39.388 11:52:44 -- setup/common.sh@18 -- # local node=
00:08:39.388 11:52:44 -- setup/common.sh@19 -- # local var val
00:08:39.388 11:52:44 -- setup/common.sh@20 -- # local mem_f mem
00:08:39.388 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:39.388 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:39.388 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:39.388 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem
00:08:39.388 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:39.388 11:52:44 -- setup/common.sh@31 -- # IFS=': '
00:08:39.389 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3541720 kB' 'MemAvailable: 9470980 kB' 'Buffers: 40196 kB' 'Cached: 5969752 kB' 'SwapCached: 0 kB' 'Active: 1591456 kB' 'Inactive: 4554664 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146760 kB' 'Active(file): 1590376 kB' 'Inactive(file): 4407904 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 416 kB' 'Writeback: 0 kB' 'AnonPages: 165160 kB' 'Mapped: 73432 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323744 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71248 kB' 'KernelStack: 4500 kB' 'PageTables: 3960 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19588 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB'
00:08:39.389 11:52:44 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: every /proc/meminfo field is tested against HugePages_Total and skipped with "continue" until the HugePages_Total line is reached ...]
00:08:39.390 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:08:39.390 11:52:44 -- setup/common.sh@33 -- # echo 1024
00:08:39.390 11:52:44 -- setup/common.sh@33 -- # return 0
00:08:39.390 11:52:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:08:39.390 11:52:44 -- setup/hugepages.sh@112 -- # get_nodes
00:08:39.390 11:52:44 -- setup/hugepages.sh@27 -- # local node
00:08:39.390 11:52:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:08:39.390 11:52:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:08:39.390 11:52:44 -- setup/hugepages.sh@32 -- # no_nodes=1
00:08:39.390 11:52:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:08:39.390 11:52:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:08:39.390 11:52:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
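The next stretch of trace repeats the same lookup per NUMA node, reading /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A rough stand-alone equivalent of that per-node walk (loop structure and variable names are illustrative, the sysfs paths are the ones the log itself uses) is:

# List each NUMA node the kernel exposes and report its hugepage counters from that
# node's meminfo file; per-node lines are prefixed "Node <id>", hence $3/$4 in awk.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -e $node_dir/meminfo ]] || continue
    node=${node_dir##*node}
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    free=$(awk '$3 == "HugePages_Free:" {print $4}' "$node_dir/meminfo")
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
    echo "node$node: HugePages_Total=$total HugePages_Free=$free HugePages_Surp=$surp"
done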
00:08:39.390 11:52:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:08:39.390 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:08:39.390 11:52:44 -- setup/common.sh@18 -- # local node=0
00:08:39.390 11:52:44 -- setup/common.sh@19 -- # local var val
00:08:39.390 11:52:44 -- setup/common.sh@20 -- # local mem_f mem
00:08:39.390 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:39.390 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:08:39.390 11:52:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:08:39.390 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem
00:08:39.390 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:39.390 11:52:44 -- setup/common.sh@31 -- # IFS=': '
00:08:39.390 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3542760 kB' 'MemUsed: 8700196 kB' 'SwapCached: 0 kB' 'Active: 1591464 kB' 'Inactive: 4554612 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 146708 kB' 'Active(file): 1590376 kB' 'Inactive(file): 4407904 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 416 kB' 'Writeback: 0 kB' 'FilePages: 6009948 kB' 'Mapped: 73432 kB' 'AnonPages: 165132 kB' 'Shmem: 2604 kB' 'KernelStack: 4552 kB' 'PageTables: 3924 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323744 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:08:39.390 11:52:44 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: every node0 meminfo field is tested against HugePages_Surp and skipped with "continue" until the HugePages_Surp line is reached ...]
00:08:39.391 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:39.391 11:52:44 -- setup/common.sh@33 -- # echo 0
00:08:39.391 11:52:44 -- setup/common.sh@33 -- # return 0
00:08:39.391 11:52:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:08:39.391 11:52:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:08:39.391 11:52:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:08:39.391 11:52:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:08:39.391 11:52:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:08:39.391 node0=1024 expecting 1024
00:08:39.391 11:52:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
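The "node0=1024 expecting 1024" assertion above checks that the allocated pages landed on the node the test expected. One hedged way to sanity-check the same thing by hand on any box (this is an illustration, not the test's own code) is to sum the per-node totals and compare them with the global pool:

# The per-node HugePages_Total values should add up to the global figure in /proc/meminfo.
global=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
sum=0
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    n=$(awk '$3 == "HugePages_Total:" {print $4}' "$f")
    sum=$(( sum + n ))
done
echo "per-node sum=$sum global=$global"
(( sum == global )) || echo "node totals do not add up to the global pool" >&2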
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:39.391 00:08:39.391 real 0m1.223s 00:08:39.391 user 0m0.362s 00:08:39.391 sys 0m0.844s 00:08:39.391 11:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.391 ************************************ 00:08:39.391 END TEST default_setup 00:08:39.391 11:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.391 ************************************ 00:08:39.391 11:52:44 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:39.391 11:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:39.391 11:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.391 11:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.391 ************************************ 00:08:39.391 START TEST per_node_1G_alloc 00:08:39.391 ************************************ 00:08:39.391 11:52:44 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:08:39.391 11:52:44 -- setup/hugepages.sh@143 -- # local IFS=, 00:08:39.391 11:52:44 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:08:39.391 11:52:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:39.391 11:52:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:39.391 11:52:44 -- setup/hugepages.sh@51 -- # shift 00:08:39.391 11:52:44 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:39.391 11:52:44 -- setup/hugepages.sh@52 -- # local node_ids 00:08:39.391 11:52:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:39.391 11:52:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:39.391 11:52:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:39.391 11:52:44 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:39.391 11:52:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:39.391 11:52:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:39.391 11:52:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:39.391 11:52:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:39.391 11:52:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:39.391 11:52:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:39.391 11:52:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:39.391 11:52:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:39.391 11:52:44 -- setup/hugepages.sh@73 -- # return 0 00:08:39.391 11:52:44 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:39.391 11:52:44 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:08:39.391 11:52:44 -- setup/hugepages.sh@146 -- # setup output 00:08:39.391 11:52:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:39.391 11:52:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:39.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:39.659 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:39.921 11:52:45 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:08:39.921 11:52:45 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:39.921 11:52:45 -- setup/hugepages.sh@89 -- # local node 00:08:39.921 11:52:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:39.921 11:52:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:39.921 11:52:45 -- setup/hugepages.sh@92 -- # local surp 00:08:39.921 11:52:45 -- setup/hugepages.sh@93 -- # local resv 00:08:39.921 11:52:45 -- setup/hugepages.sh@94 -- # local anon 00:08:39.921 11:52:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]] 00:08:39.921 11:52:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:39.921 11:52:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:39.921 11:52:45 -- setup/common.sh@18 -- # local node= 00:08:39.921 11:52:45 -- setup/common.sh@19 -- # local var val 00:08:39.921 11:52:45 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.921 11:52:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.921 11:52:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.921 11:52:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.921 11:52:45 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.921 11:52:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4592564 kB' 'MemAvailable: 10521828 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554880 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 146988 kB' 'Active(file): 1590392 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165528 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323520 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 4468 kB' 'PageTables: 3700 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19620 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.921 11:52:45 -- setup/common.sh@32 -- # continue 00:08:39.921 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.183 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.183 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.184 11:52:45 -- setup/common.sh@33 -- # echo 0 00:08:40.184 11:52:45 -- setup/common.sh@33 -- # return 0 00:08:40.184 11:52:45 -- setup/hugepages.sh@97 -- # anon=0 00:08:40.184 11:52:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:40.184 11:52:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:40.184 11:52:45 -- setup/common.sh@18 -- # local node= 00:08:40.184 11:52:45 -- setup/common.sh@19 -- # local var val 00:08:40.184 11:52:45 -- setup/common.sh@20 -- # local mem_f mem 00:08:40.184 11:52:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.184 11:52:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.184 11:52:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.184 11:52:45 -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.184 11:52:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4592564 kB' 'MemAvailable: 10521828 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554940 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 147048 kB' 'Active(file): 1590392 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165600 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323520 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 4484 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 19620 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 
-- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.184 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.184 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 
-- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.185 11:52:45 -- setup/common.sh@33 -- # echo 0 00:08:40.185 11:52:45 -- setup/common.sh@33 -- # return 0 00:08:40.185 11:52:45 -- setup/hugepages.sh@99 -- # surp=0 00:08:40.185 11:52:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:40.185 11:52:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:40.185 11:52:45 -- setup/common.sh@18 -- # local node= 00:08:40.185 11:52:45 -- setup/common.sh@19 -- # local var val 00:08:40.185 11:52:45 -- setup/common.sh@20 -- # local mem_f mem 00:08:40.185 11:52:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.185 11:52:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.185 11:52:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.185 11:52:45 -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.185 11:52:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 
11:52:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4592816 kB' 'MemAvailable: 10522080 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554912 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 147020 kB' 'Active(file): 1590392 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165604 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323520 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 4484 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.185 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.185 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- 
setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.186 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.186 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:40.186 11:52:45 -- setup/common.sh@33 -- # echo 0 00:08:40.186 11:52:45 -- setup/common.sh@33 -- # return 0 00:08:40.186 11:52:45 -- setup/hugepages.sh@100 -- # resv=0 00:08:40.186 11:52:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:40.186 nr_hugepages=512 00:08:40.186 11:52:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:40.186 resv_hugepages=0 00:08:40.186 11:52:45 -- setup/hugepages.sh@104 -- # echo 
surplus_hugepages=0 00:08:40.186 surplus_hugepages=0 00:08:40.186 11:52:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:40.186 anon_hugepages=0 00:08:40.186 11:52:45 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:40.186 11:52:45 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:40.186 11:52:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:40.186 11:52:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:40.186 11:52:45 -- setup/common.sh@18 -- # local node= 00:08:40.186 11:52:45 -- setup/common.sh@19 -- # local var val 00:08:40.187 11:52:45 -- setup/common.sh@20 -- # local mem_f mem 00:08:40.187 11:52:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.187 11:52:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.187 11:52:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.187 11:52:45 -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.187 11:52:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4593348 kB' 'MemAvailable: 10522612 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554800 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 146908 kB' 'Active(file): 1590392 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165536 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323520 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 4448 kB' 'PageTables: 3808 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 536972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': 
' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.187 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.187 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 
11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:40.188 11:52:45 -- setup/common.sh@33 -- # echo 512 00:08:40.188 11:52:45 -- setup/common.sh@33 -- # return 0 00:08:40.188 11:52:45 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:40.188 11:52:45 -- setup/hugepages.sh@112 -- # get_nodes 00:08:40.188 11:52:45 -- setup/hugepages.sh@27 -- # local node 00:08:40.188 11:52:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:40.188 11:52:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:40.188 11:52:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:40.188 11:52:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:40.188 11:52:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:40.188 11:52:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:40.188 11:52:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:40.188 11:52:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:40.188 11:52:45 -- setup/common.sh@18 -- # local node=0 00:08:40.188 11:52:45 -- setup/common.sh@19 -- # local var val 00:08:40.188 11:52:45 -- setup/common.sh@20 -- # local mem_f mem 00:08:40.188 11:52:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.188 11:52:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:40.188 11:52:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:40.188 11:52:45 -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.188 11:52:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4593348 kB' 'MemUsed: 7649608 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554996 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 147104 kB' 'Active(file): 1590392 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'FilePages: 6009952 kB' 'Mapped: 73444 kB' 'AnonPages: 165456 kB' 'Shmem: 2604 kB' 'KernelStack: 4484 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323520 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 
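The lookup in progress here targets node 0 rather than the whole machine: get_meminfo was called with node=0, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix on each line is stripped (the mem=("${mem[@]#Node +([0-9]) }") step) before the same key scan runs. The node0=512 expecting 512 message printed a little further down is simply 1 GiB per node expressed in 2048 kB hugepages; a quick check with the values from this log:

    # per_node_1G_alloc sizing, using the 2048 kB Hugepagesize reported in the log.
    hugepagesize_kb=2048
    per_node_kb=$((1024 * 1024))               # 1 GiB per node, in kB
    echo $((per_node_kb / hugepagesize_kb))    # -> 512, matching "node0=512 expecting 512"

    # The per-node figure itself comes from the node-local file, not /proc/meminfo;
    # its lines carry a "Node 0 " prefix that the helper strips before parsing.
    grep HugePages_Total /sys/devices/system/node/node0/meminfo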
00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.188 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.188 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 
11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # continue 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # IFS=': ' 00:08:40.189 11:52:45 -- setup/common.sh@31 -- # read -r var val _ 00:08:40.189 11:52:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.189 11:52:45 -- setup/common.sh@33 -- # echo 0 00:08:40.189 11:52:45 -- setup/common.sh@33 -- # return 0 00:08:40.189 11:52:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:40.189 11:52:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:40.189 node0=512 expecting 512 00:08:40.189 ************************************ 00:08:40.189 END TEST per_node_1G_alloc 00:08:40.189 ************************************ 00:08:40.189 11:52:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:40.189 11:52:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:40.189 11:52:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:40.189 11:52:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:40.189 00:08:40.189 real 0m0.853s 00:08:40.189 user 0m0.338s 00:08:40.189 sys 0m0.453s 00:08:40.189 11:52:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.189 11:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.448 11:52:45 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:40.448 11:52:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.448 11:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.448 11:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.448 ************************************ 00:08:40.448 START TEST even_2G_alloc 00:08:40.448 ************************************ 00:08:40.448 11:52:45 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:08:40.448 11:52:45 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:40.448 11:52:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:40.448 11:52:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:40.448 11:52:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:40.448 11:52:45 
-- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:40.448 11:52:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:40.448 11:52:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:40.448 11:52:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:40.448 11:52:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:40.448 11:52:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:40.448 11:52:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:40.448 11:52:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:40.448 11:52:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:40.448 11:52:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:40.448 11:52:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:40.448 11:52:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:08:40.448 11:52:45 -- setup/hugepages.sh@83 -- # : 0 00:08:40.448 11:52:45 -- setup/hugepages.sh@84 -- # : 0 00:08:40.448 11:52:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:40.448 11:52:45 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:40.448 11:52:45 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:40.448 11:52:45 -- setup/hugepages.sh@153 -- # setup output 00:08:40.448 11:52:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:40.448 11:52:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:40.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:40.706 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.278 11:52:46 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:41.278 11:52:46 -- setup/hugepages.sh@89 -- # local node 00:08:41.278 11:52:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:41.278 11:52:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:41.278 11:52:46 -- setup/hugepages.sh@92 -- # local surp 00:08:41.278 11:52:46 -- setup/hugepages.sh@93 -- # local resv 00:08:41.278 11:52:46 -- setup/hugepages.sh@94 -- # local anon 00:08:41.278 11:52:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:41.278 11:52:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:41.278 11:52:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:41.278 11:52:46 -- setup/common.sh@18 -- # local node= 00:08:41.278 11:52:46 -- setup/common.sh@19 -- # local var val 00:08:41.278 11:52:46 -- setup/common.sh@20 -- # local mem_f mem 00:08:41.278 11:52:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.278 11:52:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.278 11:52:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.278 11:52:46 -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.278 11:52:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3544736 kB' 'MemAvailable: 9474000 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4554728 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 146840 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165440 kB' 'Mapped: 73452 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323784 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71288 kB' 'KernelStack: 4464 kB' 
'PageTables: 3768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- 
setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.278 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.278 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read 
-r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.279 11:52:46 -- setup/common.sh@33 -- # echo 0 00:08:41.279 11:52:46 -- setup/common.sh@33 -- # return 0 00:08:41.279 11:52:46 -- setup/hugepages.sh@97 -- # anon=0 00:08:41.279 11:52:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:41.279 11:52:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.279 11:52:46 -- setup/common.sh@18 -- # local node= 00:08:41.279 11:52:46 -- setup/common.sh@19 -- # local var val 00:08:41.279 11:52:46 -- setup/common.sh@20 -- # local mem_f mem 00:08:41.279 11:52:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.279 11:52:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.279 11:52:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.279 11:52:46 -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.279 11:52:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3545492 kB' 'MemAvailable: 9474756 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554552 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146664 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165260 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323800 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71304 kB' 'KernelStack: 4448 kB' 'PageTables: 3724 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
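By this point the trace has moved on to even_2G_alloc: get_test_nr_hugepages was handed 2097152 (a kB figure, the only reading consistent with the surrounding values), which against the 2048 kB Hugepagesize reported in the meminfo dumps gives the nr_hugepages=1024 seen above, and HUGE_EVEN_ALLOC=yes spreads those pages evenly over the online nodes; with only node0 online, all 1024 land there. A rough restatement of that sizing, assuming the straight division implied by the trace:

    # Rough check of the even_2G_alloc sizing shown in the trace (values from the log;
    # the per-node split mirrors HUGE_EVEN_ALLOC=yes with a single online node).
    size_kb=2097152          # requested size passed to get_test_nr_hugepages
    hugepagesize_kb=2048     # Hugepagesize from /proc/meminfo
    no_nodes=1               # only node0 is online on this VM

    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 1024
    per_node=$(( nr_hugepages / no_nodes ))         # -> 1024, all on node0
    echo "nr_hugepages=$nr_hugepages per_node=$per_node total_kb=$(( nr_hugepages * hugepagesize_kb ))"
    # total_kb -> 2097152 kB (2 GiB), matching the 'Hugetlb: 2097152 kB' line above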
00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.279 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.279 11:52:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 
00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 
-- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.280 11:52:46 -- setup/common.sh@33 -- # echo 0 00:08:41.280 11:52:46 -- setup/common.sh@33 -- # return 0 00:08:41.280 11:52:46 -- setup/hugepages.sh@99 -- # surp=0 00:08:41.280 11:52:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:41.280 11:52:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:41.280 11:52:46 -- setup/common.sh@18 -- # local node= 00:08:41.280 11:52:46 -- setup/common.sh@19 -- # local var val 00:08:41.280 11:52:46 -- setup/common.sh@20 -- # local mem_f mem 00:08:41.280 11:52:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.280 11:52:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.280 11:52:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.280 11:52:46 -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.280 11:52:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3546012 kB' 'MemAvailable: 9475276 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554564 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146676 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165356 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323800 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71304 kB' 'KernelStack: 4464 kB' 'PageTables: 3760 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19588 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 
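Once the HugePages_Rsvd lookup that starts here returns, verify_nr_hugepages has all three correction terms for this run (anonymous, surplus and reserved hugepages, each 0), and the (( ... )) checks echoed further down reduce to HugePages_Total matching the 1024 pages that were requested. The bookkeeping, restated with the values from this trace:

    # verify_nr_hugepages bookkeeping, restated with the values echoed in this trace.
    nr_hugepages=1024     # requested by even_2G_alloc
    anon=0                # AnonHugePages
    surp=0                # HugePages_Surp
    resv=0                # HugePages_Rsvd
    hugepages_total=1024  # HugePages_Total from /proc/meminfo

    # Pass condition exercised by the checks visible later in the trace:
    if (( hugepages_total == nr_hugepages + surp + resv )) && (( hugepages_total == nr_hugepages )); then
        echo "hugepage accounting consistent"
    fi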
00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.280 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.280 11:52:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 
-- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 
-- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.281 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.281 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.282 11:52:46 -- setup/common.sh@33 -- # echo 0 00:08:41.282 11:52:46 -- setup/common.sh@33 -- # return 0 00:08:41.282 11:52:46 -- setup/hugepages.sh@100 -- # resv=0 00:08:41.282 11:52:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:41.282 nr_hugepages=1024 00:08:41.282 11:52:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:41.282 resv_hugepages=0 00:08:41.282 11:52:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:41.282 surplus_hugepages=0 00:08:41.282 11:52:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:41.282 anon_hugepages=0 00:08:41.282 11:52:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:41.282 11:52:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:41.282 11:52:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:41.282 11:52:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:41.282 11:52:46 -- setup/common.sh@18 -- # local node= 00:08:41.282 11:52:46 -- setup/common.sh@19 -- # local var val 00:08:41.282 11:52:46 -- setup/common.sh@20 -- # local mem_f mem 00:08:41.282 11:52:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.282 11:52:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.282 11:52:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.282 11:52:46 -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.282 11:52:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3546248 kB' 'MemAvailable: 9475512 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554552 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146664 kB' 'Active(file): 1590396 kB' 'Inactive(file): 
4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165276 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323800 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71304 kB' 'KernelStack: 4448 kB' 'PageTables: 3724 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 
-- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.282 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.282 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.283 11:52:46 -- setup/common.sh@33 -- # echo 1024 00:08:41.283 11:52:46 -- setup/common.sh@33 -- # return 0 00:08:41.283 11:52:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:41.283 11:52:46 -- setup/hugepages.sh@112 -- # get_nodes 00:08:41.283 11:52:46 -- setup/hugepages.sh@27 -- # local node 00:08:41.283 11:52:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:41.283 11:52:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:41.283 11:52:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:41.283 11:52:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:41.283 11:52:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:41.283 11:52:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:41.283 11:52:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:41.283 11:52:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.283 11:52:46 -- setup/common.sh@18 -- # local node=0 00:08:41.283 11:52:46 -- setup/common.sh@19 -- # local var val 00:08:41.283 11:52:46 -- setup/common.sh@20 -- # local mem_f mem 00:08:41.283 11:52:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.283 11:52:46 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:41.283 11:52:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:41.283 11:52:46 -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.283 11:52:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3546248 kB' 'MemUsed: 8696708 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4555040 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 147152 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'FilePages: 6009952 kB' 'Mapped: 73444 kB' 'AnonPages: 165760 kB' 'Shmem: 2604 kB' 'KernelStack: 4496 kB' 'PageTables: 3836 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323808 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.283 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.283 11:52:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 
00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.284 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.284 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # continue 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # IFS=': ' 00:08:41.543 11:52:46 -- setup/common.sh@31 -- # read -r var val _ 00:08:41.543 11:52:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.543 11:52:46 -- setup/common.sh@33 -- # echo 0 00:08:41.543 11:52:46 -- setup/common.sh@33 -- # return 0 00:08:41.543 11:52:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:41.543 11:52:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:41.543 11:52:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:41.543 11:52:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:41.543 11:52:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:41.543 node0=1024 expecting 1024 00:08:41.543 11:52:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:41.543 00:08:41.543 real 0m1.087s 00:08:41.543 user 0m0.307s 00:08:41.543 sys 0m0.720s 00:08:41.543 11:52:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.543 11:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.543 ************************************ 00:08:41.543 END TEST even_2G_alloc 00:08:41.543 ************************************ 00:08:41.543 11:52:46 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:41.543 11:52:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:41.544 11:52:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.544 11:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.544 ************************************ 00:08:41.544 START TEST odd_alloc 00:08:41.544 ************************************ 00:08:41.544 11:52:46 -- common/autotest_common.sh@1114 -- # odd_alloc 00:08:41.544 11:52:46 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:41.544 11:52:46 -- setup/hugepages.sh@49 -- # local size=2098176 00:08:41.544 11:52:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:41.544 11:52:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:41.544 11:52:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:41.544 11:52:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:41.544 11:52:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:41.544 11:52:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:41.544 11:52:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:41.544 11:52:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:41.544 11:52:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:41.544 11:52:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:41.544 11:52:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:41.544 11:52:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:41.544 11:52:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:41.544 11:52:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:41.544 11:52:46 -- setup/hugepages.sh@83 -- # : 0 00:08:41.544 11:52:46 -- setup/hugepages.sh@84 -- # : 0 00:08:41.544 11:52:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:41.544 11:52:46 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:41.544 11:52:46 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:41.544 11:52:46 -- setup/hugepages.sh@160 -- # setup output 00:08:41.544 11:52:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:41.544 11:52:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 
00:08:41.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:41.802 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.374 11:52:47 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:42.374 11:52:47 -- setup/hugepages.sh@89 -- # local node 00:08:42.374 11:52:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:42.374 11:52:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:42.374 11:52:47 -- setup/hugepages.sh@92 -- # local surp 00:08:42.374 11:52:47 -- setup/hugepages.sh@93 -- # local resv 00:08:42.374 11:52:47 -- setup/hugepages.sh@94 -- # local anon 00:08:42.374 11:52:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:42.374 11:52:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:42.374 11:52:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:42.374 11:52:47 -- setup/common.sh@18 -- # local node= 00:08:42.374 11:52:47 -- setup/common.sh@19 -- # local var val 00:08:42.374 11:52:47 -- setup/common.sh@20 -- # local mem_f mem 00:08:42.374 11:52:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.374 11:52:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.374 11:52:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.374 11:52:47 -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.374 11:52:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3540720 kB' 'MemAvailable: 9469984 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4554648 kB' 'Active(anon): 1096 kB' 'Inactive(anon): 146760 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165416 kB' 'Mapped: 73480 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323712 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71216 kB' 'KernelStack: 4436 kB' 'PageTables: 3804 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071876 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19604 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- 
setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.374 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.374 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.375 11:52:47 -- setup/common.sh@33 -- # echo 0 00:08:42.375 11:52:47 -- setup/common.sh@33 -- # return 0 00:08:42.375 11:52:47 -- setup/hugepages.sh@97 -- # anon=0 00:08:42.375 11:52:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:42.375 11:52:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.375 11:52:47 -- setup/common.sh@18 -- # local node= 00:08:42.375 11:52:47 -- setup/common.sh@19 -- # local var val 00:08:42.375 11:52:47 -- setup/common.sh@20 -- # local mem_f mem 00:08:42.375 11:52:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.375 11:52:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.375 11:52:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.375 11:52:47 -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.375 11:52:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3540720 kB' 'MemAvailable: 9469984 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554600 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146712 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165308 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323736 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71240 kB' 'KernelStack: 4480 kB' 'PageTables: 3796 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071876 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19620 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 
-- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.375 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.375 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # 
continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.376 11:52:47 -- setup/common.sh@33 -- # echo 0 00:08:42.376 11:52:47 -- setup/common.sh@33 -- # return 0 00:08:42.376 11:52:47 -- setup/hugepages.sh@99 -- # surp=0 00:08:42.376 
11:52:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:42.376 11:52:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:42.376 11:52:47 -- setup/common.sh@18 -- # local node= 00:08:42.376 11:52:47 -- setup/common.sh@19 -- # local var val 00:08:42.376 11:52:47 -- setup/common.sh@20 -- # local mem_f mem 00:08:42.376 11:52:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.376 11:52:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.376 11:52:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.376 11:52:47 -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.376 11:52:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3540744 kB' 'MemAvailable: 9470008 kB' 'Buffers: 40196 kB' 'Cached: 5969756 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554580 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146692 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165604 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323736 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71240 kB' 'KernelStack: 4528 kB' 'PageTables: 3912 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071876 kB' 'Committed_AS: 536560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- 
setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.376 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.376 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 
00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.377 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.377 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.377 11:52:47 -- setup/common.sh@33 -- # echo 0 00:08:42.377 11:52:47 -- setup/common.sh@33 -- # return 0 00:08:42.377 nr_hugepages=1025 00:08:42.377 resv_hugepages=0 00:08:42.377 surplus_hugepages=0 00:08:42.377 anon_hugepages=0 00:08:42.377 11:52:47 -- setup/hugepages.sh@100 -- # resv=0 00:08:42.377 11:52:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:42.377 11:52:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:42.377 11:52:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:42.377 11:52:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:42.377 11:52:47 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:42.377 11:52:47 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:42.377 11:52:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:42.377 11:52:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:42.377 11:52:47 -- setup/common.sh@18 -- # local node= 00:08:42.377 11:52:47 -- setup/common.sh@19 -- # local var val 00:08:42.377 11:52:47 -- setup/common.sh@20 -- # local mem_f mem 00:08:42.377 11:52:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.378 11:52:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.378 11:52:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.378 11:52:47 -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.378 11:52:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3541732 kB' 'MemAvailable: 9470996 kB' 'Buffers: 40196 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554600 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146712 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'AnonPages: 165424 kB' 'Mapped: 73444 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323736 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71240 kB' 'KernelStack: 4480 kB' 'PageTables: 3796 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071876 kB' 'Committed_AS: 536948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- 
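[Annotation] The xtrace above repeatedly drives a get_meminfo helper from setup/common.sh: it loads a meminfo file, walks it line by line with IFS=': ', and skips (continue) every field until the requested key matches, then echoes the value. A minimal sketch of that behaviour, reconstructed only from the trace — the exact parsing details are assumptions and the real script may differ:

    shopt -s extglob                                   # needed for the +([0-9]) pattern below
    get_meminfo() {                                    # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # With a node argument the per-node file is read instead, as done for node0 later in the log.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")               # per-node files prefix every line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue           # the long [[ ... ]] / continue runs in the trace
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    rsvd=$(get_meminfo HugePages_Rsvd)                 # 0 in the run above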
setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 
00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 
11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.378 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.378 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.379 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.379 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.667 11:52:47 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.667 11:52:47 -- setup/common.sh@33 -- # echo 1025 00:08:42.667 11:52:47 -- setup/common.sh@33 -- # return 0 00:08:42.667 11:52:47 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:42.667 11:52:47 -- setup/hugepages.sh@112 -- # get_nodes 00:08:42.667 11:52:47 -- setup/hugepages.sh@27 -- # local node 00:08:42.667 11:52:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:42.667 11:52:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:08:42.667 11:52:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:42.667 11:52:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:42.667 11:52:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:42.667 11:52:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:42.667 11:52:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:42.667 11:52:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.667 11:52:47 -- setup/common.sh@18 -- # local node=0 00:08:42.667 11:52:47 -- setup/common.sh@19 -- # local var val 00:08:42.667 11:52:47 -- setup/common.sh@20 -- # local mem_f mem 00:08:42.667 11:52:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.667 11:52:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:42.667 11:52:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:42.667 11:52:47 -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.667 11:52:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.667 11:52:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3541732 kB' 'MemUsed: 8701224 kB' 'SwapCached: 0 kB' 'Active: 1591476 kB' 'Inactive: 4554532 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 146644 kB' 'Active(file): 1590396 kB' 'Inactive(file): 4407888 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 668 kB' 'Writeback: 0 kB' 'FilePages: 6009956 kB' 'Mapped: 73444 kB' 'AnonPages: 165300 kB' 'Shmem: 2604 kB' 'KernelStack: 4532 kB' 'PageTables: 3760 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323736 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # [[ MemTotal == 
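[Annotation] After the global HugePages_Total lookup returns 1025, the trace enumerates NUMA nodes (get_nodes) and folds reserved and surplus pages into the per-node expectation before querying node0's meminfo. A rough sketch of that accounting, approximated with the get_meminfo helper sketched earlier — how hugepages.sh actually fills nodes_sys/nodes_test may differ:

    shopt -s extglob
    declare -a nodes_sys nodes_test
    # Record the hugepage pool each online NUMA node exposes.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")   # 1025 on node0 here
    done
    no_nodes=${#nodes_sys[@]}                 # 1 on this single-node VM
    # Fold reserved and surplus pages into what each node is expected to report.
    resv=$(get_meminfo HugePages_Rsvd)        # 0 above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")    # read from node0's meminfo, 0 above
        (( nodes_test[node] += surp ))
    done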
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.667 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.667 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # 
continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # continue 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # IFS=': ' 00:08:42.668 11:52:47 -- setup/common.sh@31 -- # read -r var val _ 00:08:42.668 11:52:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.668 11:52:47 -- setup/common.sh@33 -- # echo 0 00:08:42.668 11:52:47 -- setup/common.sh@33 -- # return 0 00:08:42.668 11:52:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:42.668 11:52:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:42.668 11:52:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:42.668 11:52:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:42.668 11:52:47 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:42.668 node0=1025 expecting 1025 00:08:42.668 11:52:47 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:42.668 00:08:42.668 real 0m1.086s 00:08:42.668 user 0m0.305s 00:08:42.668 sys 0m0.726s 00:08:42.668 11:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.668 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.668 ************************************ 00:08:42.668 END TEST odd_alloc 00:08:42.668 ************************************ 00:08:42.668 11:52:47 -- 
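[Annotation] The odd_alloc test that finishes here boils down to one arithmetic check: an odd number of hugepages (1025) is requested and the kernel must report exactly that, with no reserved or surplus pages, globally and on the single node. A condensed restatement of the checks the trace performs, with the values quoted from the meminfo dumps above:

    nr_hugepages=1025                           # the deliberately odd count under test
    resv=$(get_meminfo HugePages_Rsvd)          # 0
    surp=$(get_meminfo HugePages_Surp)          # 0
    total=$(get_meminfo HugePages_Total)        # 1025
    (( total == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0 -> holds
    # per-node check from the trace: node0=1025 expecting 1025  ->  [[ 1025 == \1\0\2\5 ]] passes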
setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:42.668 11:52:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.668 11:52:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.668 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.668 ************************************ 00:08:42.668 START TEST custom_alloc 00:08:42.668 ************************************ 00:08:42.668 11:52:47 -- common/autotest_common.sh@1114 -- # custom_alloc 00:08:42.668 11:52:47 -- setup/hugepages.sh@167 -- # local IFS=, 00:08:42.668 11:52:47 -- setup/hugepages.sh@169 -- # local node 00:08:42.668 11:52:47 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:42.668 11:52:47 -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:42.668 11:52:47 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:42.668 11:52:47 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:08:42.668 11:52:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:42.668 11:52:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:42.668 11:52:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:42.668 11:52:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:42.669 11:52:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:42.669 11:52:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:42.669 11:52:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.669 11:52:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:42.669 11:52:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.669 11:52:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.669 11:52:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.669 11:52:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:42.669 11:52:47 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:42.669 11:52:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.669 11:52:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:42.669 11:52:47 -- setup/hugepages.sh@83 -- # : 0 00:08:42.669 11:52:47 -- setup/hugepages.sh@84 -- # : 0 00:08:42.669 11:52:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.669 11:52:47 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:42.669 11:52:47 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:42.669 11:52:47 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:42.669 11:52:47 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:42.669 11:52:47 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:42.669 11:52:47 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:42.669 11:52:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:42.669 11:52:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.669 11:52:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:42.669 11:52:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.669 11:52:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.669 11:52:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.669 11:52:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:42.669 11:52:48 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:42.669 11:52:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:42.669 11:52:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:42.669 11:52:48 -- setup/hugepages.sh@78 -- # return 0 00:08:42.669 11:52:48 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:42.669 11:52:48 -- 
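[Annotation] The custom_alloc prologue traced above converts a requested size into a page count and pins it to one node via the HUGENODE string handed to setup.sh. The arithmetic in the trace is consistent with both the size argument and Hugepagesize being in kB; a worked version of that step (variable names follow the trace, the snippet is illustrative only):

    size_kb=1048576                              # argument to get_test_nr_hugepages (1 GiB in kB)
    hugepagesize_kb=2048                         # Hugepagesize from the meminfo dumps
    (( nr_hugepages = size_kb / hugepagesize_kb ))   # 512
    nodes_hp[0]=$nr_hugepages                    # the single node receives the whole pool
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"        # handed to setup.sh, as in the trace
    echo "$HUGENODE"                             # nodes_hp[0]=512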
setup/hugepages.sh@187 -- # setup output 00:08:42.669 11:52:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:42.669 11:52:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:42.927 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.187 11:52:48 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:43.187 11:52:48 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:08:43.187 11:52:48 -- setup/hugepages.sh@89 -- # local node 00:08:43.187 11:52:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:43.187 11:52:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:43.187 11:52:48 -- setup/hugepages.sh@92 -- # local surp 00:08:43.187 11:52:48 -- setup/hugepages.sh@93 -- # local resv 00:08:43.187 11:52:48 -- setup/hugepages.sh@94 -- # local anon 00:08:43.187 11:52:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:43.187 11:52:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:43.187 11:52:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:43.187 11:52:48 -- setup/common.sh@18 -- # local node= 00:08:43.187 11:52:48 -- setup/common.sh@19 -- # local var val 00:08:43.187 11:52:48 -- setup/common.sh@20 -- # local mem_f mem 00:08:43.187 11:52:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.187 11:52:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.187 11:52:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.187 11:52:48 -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.187 11:52:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.187 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4595052 kB' 'MemAvailable: 10524328 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550796 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 142904 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161800 kB' 'Mapped: 72720 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323488 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70992 kB' 'KernelStack: 4388 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 526548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # 
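[Annotation] verify_nr_hugepages starts by checking whether transparent hugepages are disabled before it trusts an AnonHugePages baseline; the string "always [madvise] never" in the trace matches the format of /sys/kernel/mm/transparent_hugepage/enabled, so the check presumably reads that file. A small sketch of the guard, with the escaped glob from the trace written unescaped:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this VM
    if [[ $thp != *"[never]"* ]]; then
        # THP is not fully disabled, so the anonymous-hugepage baseline is sampled too.
        anon=$(get_meminfo AnonHugePages)                     # 0 kB in the dump above
    fi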
IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- 
setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.188 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.188 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.189 11:52:48 -- setup/common.sh@33 -- # echo 0 00:08:43.189 11:52:48 -- setup/common.sh@33 -- # return 0 00:08:43.189 11:52:48 -- setup/hugepages.sh@97 -- # anon=0 00:08:43.189 11:52:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:43.189 11:52:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.189 11:52:48 -- setup/common.sh@18 -- # local node= 00:08:43.189 11:52:48 -- setup/common.sh@19 -- # local var val 00:08:43.189 11:52:48 -- setup/common.sh@20 -- # local mem_f mem 00:08:43.189 11:52:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.189 11:52:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.189 11:52:48 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.189 11:52:48 -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.189 11:52:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4595284 kB' 'MemAvailable: 10524560 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550808 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142916 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161508 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323584 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71088 kB' 'KernelStack: 4368 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 526548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.189 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.189 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.190 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.190 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 
-- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 
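[Editor's note] The trace entries above and below record test/setup/common.sh's get_meminfo helper scanning a meminfo file key by key: every non-matching key produces one "continue" entry, and the matching key ends the scan with "echo <value>" / "return 0". A minimal, hypothetical sketch of that lookup behaviour, for readability only — the helper name comes from the trace, while the real implementation in test/setup/common.sh uses mapfile and per-node meminfo files and may differ in detail:

#!/usr/bin/env bash
# Hypothetical sketch of the lookup behaviour visible in the trace; not
# the actual test/setup/common.sh implementation.
get_meminfo() {
    local get=$1 var val _
    # Split each "Key:   value kB" line on ': ' and skip keys that do
    # not match the requested one (the "continue" entries in the trace).
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"    # e.g. 0 for HugePages_Surp in this run
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

get_meminfo HugePages_Surp   # would print 0 on the state logged above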
00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.451 11:52:48 -- setup/common.sh@33 -- # echo 0 00:08:43.451 11:52:48 -- setup/common.sh@33 -- # return 0 00:08:43.451 11:52:48 -- setup/hugepages.sh@99 -- # surp=0 00:08:43.451 11:52:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:43.451 11:52:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:43.451 11:52:48 -- setup/common.sh@18 -- # local node= 00:08:43.451 11:52:48 -- setup/common.sh@19 -- # local var val 00:08:43.451 11:52:48 -- setup/common.sh@20 -- # local mem_f mem 00:08:43.451 11:52:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.451 11:52:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.451 11:52:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.451 11:52:48 -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.451 11:52:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.451 11:52:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4595284 kB' 'MemAvailable: 10524560 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550856 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142964 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161592 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323584 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71088 kB' 'KernelStack: 4400 kB' 'PageTables: 3508 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 526548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.451 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.451 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 
11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 
11:52:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.452 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.452 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.453 11:52:48 -- setup/common.sh@33 -- # echo 0 00:08:43.453 11:52:48 -- setup/common.sh@33 -- # return 0 00:08:43.453 11:52:48 -- setup/hugepages.sh@100 -- # resv=0 00:08:43.453 11:52:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:43.453 nr_hugepages=512 00:08:43.453 11:52:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:43.453 resv_hugepages=0 00:08:43.453 11:52:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:43.453 surplus_hugepages=0 00:08:43.453 11:52:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:43.453 anon_hugepages=0 00:08:43.453 11:52:48 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:43.453 11:52:48 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:43.453 11:52:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:43.453 11:52:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:43.453 11:52:48 -- setup/common.sh@18 -- # local node= 00:08:43.453 11:52:48 -- setup/common.sh@19 -- # local var val 00:08:43.453 11:52:48 -- setup/common.sh@20 -- # local mem_f mem 00:08:43.453 11:52:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.453 11:52:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.453 11:52:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.453 11:52:48 -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.453 11:52:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4595284 kB' 'MemAvailable: 10524560 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550464 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142572 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161180 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323592 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71096 kB' 'KernelStack: 4368 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597188 kB' 'Committed_AS: 526548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- 
setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 
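[Editor's note] With anon=0, surp=0 and resv=0 established by the lookups above, the check at setup/hugepages.sh@107 (and repeated below at @110 against the kernel-reported HugePages_Total) reduces to simple accounting: the expected count of 512 must equal nr_hugepages plus surplus and reserved pages. A hypothetical condensation of that arithmetic, assuming the standard /proc/meminfo field names — it mirrors only the comparison visible in the trace, not the full verify logic of setup/hugepages.sh:

#!/usr/bin/env bash
# Hypothetical sketch mirroring "(( 512 == nr_hugepages + surp + resv ))"
# from the trace; surplus and reserved pages are both 0 in this run.
verify_hugepage_accounting() {
    local expected=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == expected + surp + resv )) || { echo "mismatch: total=$total" >&2; return 1; }
    echo "nr_hugepages=$expected surplus=$surp reserved=$resv"
}

verify_hugepage_accounting 512   # passes on the state logged here (512 == 512 + 0 + 0)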
00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 
-- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.453 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.453 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.454 11:52:48 -- setup/common.sh@33 -- # echo 512 00:08:43.454 11:52:48 -- setup/common.sh@33 -- # return 0 00:08:43.454 11:52:48 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:43.454 11:52:48 -- setup/hugepages.sh@112 -- # get_nodes 00:08:43.454 11:52:48 -- setup/hugepages.sh@27 -- # local node 00:08:43.454 11:52:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:43.454 11:52:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:43.454 11:52:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:43.454 11:52:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:43.454 11:52:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:43.454 11:52:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:43.454 11:52:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:43.454 11:52:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.454 11:52:48 -- setup/common.sh@18 -- # local node=0 00:08:43.454 11:52:48 -- setup/common.sh@19 -- # local var val 00:08:43.454 11:52:48 -- setup/common.sh@20 -- # local mem_f mem 00:08:43.454 11:52:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.454 11:52:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:43.454 11:52:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:43.454 11:52:48 -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.454 11:52:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 4595536 kB' 'MemUsed: 7647420 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550420 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142528 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 6009964 kB' 'Mapped: 72776 kB' 'AnonPages: 161396 kB' 'Shmem: 2604 kB' 'KernelStack: 4404 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323592 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 71096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 
0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.454 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.454 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # continue 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:43.455 11:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:43.455 11:52:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.455 11:52:48 -- setup/common.sh@33 -- # echo 0 00:08:43.455 11:52:48 -- setup/common.sh@33 -- # return 0 00:08:43.455 11:52:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:43.455 11:52:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:43.455 11:52:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:43.455 11:52:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:43.455 11:52:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:43.455 node0=512 expecting 512 00:08:43.455 11:52:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:43.455 00:08:43.455 real 0m0.866s 00:08:43.455 user 0m0.316s 00:08:43.455 sys 0m0.486s 00:08:43.455 11:52:48 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:08:43.455 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.455 ************************************ 00:08:43.455 END TEST custom_alloc 00:08:43.455 ************************************ 00:08:43.455 11:52:48 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:43.455 11:52:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.455 11:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.455 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.455 ************************************ 00:08:43.455 START TEST no_shrink_alloc 00:08:43.455 ************************************ 00:08:43.455 11:52:48 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:08:43.455 11:52:48 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:43.455 11:52:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:43.455 11:52:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:43.455 11:52:48 -- setup/hugepages.sh@51 -- # shift 00:08:43.455 11:52:48 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:43.455 11:52:48 -- setup/hugepages.sh@52 -- # local node_ids 00:08:43.455 11:52:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:43.455 11:52:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:43.455 11:52:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:43.455 11:52:48 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:43.455 11:52:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:43.455 11:52:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:43.455 11:52:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:43.455 11:52:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:43.455 11:52:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:43.455 11:52:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:43.455 11:52:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:43.455 11:52:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:43.455 11:52:48 -- setup/hugepages.sh@73 -- # return 0 00:08:43.455 11:52:48 -- setup/hugepages.sh@198 -- # setup output 00:08:43.455 11:52:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:43.455 11:52:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:43.973 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:44.541 11:52:49 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:44.541 11:52:49 -- setup/hugepages.sh@89 -- # local node 00:08:44.541 11:52:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:44.541 11:52:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:44.541 11:52:49 -- setup/hugepages.sh@92 -- # local surp 00:08:44.542 11:52:49 -- setup/hugepages.sh@93 -- # local resv 00:08:44.542 11:52:49 -- setup/hugepages.sh@94 -- # local anon 00:08:44.542 11:52:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:44.542 11:52:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:44.542 11:52:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:44.542 11:52:49 -- setup/common.sh@18 -- # local node= 00:08:44.542 11:52:49 -- setup/common.sh@19 -- # local var val 00:08:44.542 11:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:44.542 11:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.542 11:52:49 -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node/meminfo ]] 00:08:44.542 11:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.542 11:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.542 11:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3550936 kB' 'MemAvailable: 9480212 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550952 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143060 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161684 kB' 'Mapped: 72772 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323288 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 4392 kB' 'PageTables: 3412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.542 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.542 11:52:49 -- setup/common.sh@32 -- # continue 
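[Annotation, not part of the captured xtrace] The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above (and continuing below) is the meminfo lookup loop in setup/common.sh skipping every field until it reaches the one that get_meminfo was asked for. A minimal bash sketch of that lookup, built only from what the trace shows (mem_f selection, mapfile, the "Node N " strip, IFS=': ' parsing); the real helper may differ in detail:

shopt -s extglob                          # needed for the +([0-9]) pattern below

get_meminfo() {                           # sketch of the traced helper: field name, optional NUMA node
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lookup (see node=0 later in this section)
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix; /proc/meminfo is untouched
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # every "continue" entry in the trace is one skipped field
        echo "$val"                       # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
        return 0
    done
}

get_meminfo HugePages_Total               # -> 1024 on this runner (the "echo 1024" later in the trace)
get_meminfo HugePages_Surp 0              # node 0 surplus pages, read near the end of this section -> 0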
00:08:44.542 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.543 11:52:49 -- setup/common.sh@33 -- # echo 0 00:08:44.543 11:52:49 -- setup/common.sh@33 -- # return 0 00:08:44.543 11:52:49 -- setup/hugepages.sh@97 -- # anon=0 00:08:44.543 11:52:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:44.543 11:52:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.543 11:52:49 -- setup/common.sh@18 -- # local node= 00:08:44.543 11:52:49 -- setup/common.sh@19 -- # local var val 00:08:44.543 11:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:44.543 11:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.543 11:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.543 11:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.543 11:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.543 11:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3550936 kB' 'MemAvailable: 9480212 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550716 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 142824 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161500 kB' 'Mapped: 72772 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323288 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 4376 kB' 'PageTables: 3368 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read 
-r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 
11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.543 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.543 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 
11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.544 11:52:49 -- setup/common.sh@33 -- # echo 0 00:08:44.544 11:52:49 -- setup/common.sh@33 -- # return 0 00:08:44.544 11:52:49 -- setup/hugepages.sh@99 -- # surp=0 00:08:44.544 11:52:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:44.544 11:52:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:44.544 11:52:49 -- setup/common.sh@18 -- # local node= 00:08:44.544 11:52:49 -- setup/common.sh@19 -- # local var val 00:08:44.544 11:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:44.544 11:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.544 11:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.544 11:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.544 11:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.544 11:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3550936 kB' 'MemAvailable: 9480212 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550644 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142752 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 
'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161640 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323288 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 4368 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 
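[Annotation, not part of the captured xtrace] The backslash runs such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d throughout this trace are only how set -x prints the quoted right-hand operand of the [[ $var == "$get" ]] test: every character is escaped to show that the comparison is literal, not a glob. A two-line reproduction with hypothetical values:

set -x
get=HugePages_Rsvd
var=HugePages_Rsvd
[[ $var == "$get" ]] && echo matched   # xtrace renders the quoted pattern side as \H\u\g\e\P\a\g\e\s\_\R\s\v\d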
00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.544 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.544 11:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- 
setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 
11:52:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.545 11:52:49 -- setup/common.sh@33 -- # echo 0 00:08:44.545 11:52:49 -- setup/common.sh@33 -- # return 0 00:08:44.545 nr_hugepages=1024 00:08:44.545 resv_hugepages=0 00:08:44.545 surplus_hugepages=0 00:08:44.545 anon_hugepages=0 00:08:44.545 11:52:49 -- setup/hugepages.sh@100 -- # resv=0 00:08:44.545 11:52:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:44.545 11:52:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:44.545 11:52:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:44.545 11:52:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:44.545 11:52:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.545 11:52:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:44.545 11:52:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:44.545 11:52:49 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:08:44.545 11:52:49 -- setup/common.sh@18 -- # local node= 00:08:44.545 11:52:49 -- setup/common.sh@19 -- # local var val 00:08:44.545 11:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:44.545 11:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.545 11:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.545 11:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.545 11:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.545 11:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3551440 kB' 'MemAvailable: 9480716 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550532 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142640 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 161264 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323288 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 4356 kB' 'PageTables: 3236 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.545 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.545 11:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 
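[Annotation, not part of the captured xtrace] The three lookups above returned anon=0, surp=0 and resv=0, and hugepages.sh@107-110 checks them against HugePages_Total, which the lookup in progress here supplies. A sketch of that bookkeeping, assuming the trace reflects the script's intent and reusing the get_meminfo sketch from earlier in this section:

nr_hugepages=1024                       # requested as 2097152 kB of 2048 kB pages in get_test_nr_hugepages
anon=$(get_meminfo AnonHugePages)       # 0 here; only read because THP is not set to [never] (hugepages.sh@96)
surp=$(get_meminfo HugePages_Surp)      # 0: nothing allocated beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)      # 0: nothing reserved but not yet faulted in
total=$(get_meminfo HugePages_Total)    # 1024 in the trace below
(( total == nr_hugepages + surp + resv ))   # hugepages.sh@110: the pool is exactly the size that was asked for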
00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.546 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.546 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.547 11:52:49 -- setup/common.sh@33 -- # echo 1024 00:08:44.547 11:52:49 -- setup/common.sh@33 -- # return 0 00:08:44.547 11:52:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.547 11:52:49 -- setup/hugepages.sh@112 -- # get_nodes 00:08:44.547 
11:52:49 -- setup/hugepages.sh@27 -- # local node 00:08:44.547 11:52:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:44.547 11:52:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:44.547 11:52:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:44.547 11:52:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:44.547 11:52:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:44.547 11:52:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:44.547 11:52:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:44.547 11:52:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.547 11:52:49 -- setup/common.sh@18 -- # local node=0 00:08:44.547 11:52:49 -- setup/common.sh@19 -- # local var val 00:08:44.547 11:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:44.547 11:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.547 11:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:44.547 11:52:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:44.547 11:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.547 11:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3551440 kB' 'MemUsed: 8691516 kB' 'SwapCached: 0 kB' 'Active: 1591484 kB' 'Inactive: 4550780 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142888 kB' 'Active(file): 1590404 kB' 'Inactive(file): 4407892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 6009964 kB' 'Mapped: 72776 kB' 'AnonPages: 161544 kB' 'Shmem: 2604 kB' 'KernelStack: 4368 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323288 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.547 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.547 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # continue 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:44.548 11:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:44.548 11:52:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.548 11:52:49 -- setup/common.sh@33 -- # echo 0 00:08:44.548 11:52:49 -- setup/common.sh@33 -- # return 0 00:08:44.548 11:52:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:44.548 11:52:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:44.548 11:52:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:44.548 11:52:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:44.548 11:52:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:44.548 node0=1024 expecting 1024 00:08:44.548 11:52:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:44.548 11:52:49 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:44.548 11:52:49 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:44.548 11:52:49 -- setup/hugepages.sh@202 -- # setup output 00:08:44.548 11:52:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.548 11:52:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:44.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:08:44.806 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:45.067 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:45.067 11:52:50 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:45.067 11:52:50 -- setup/hugepages.sh@89 -- # local node 00:08:45.067 11:52:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:45.067 11:52:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:45.067 11:52:50 -- setup/hugepages.sh@92 -- # local surp 00:08:45.067 11:52:50 -- setup/hugepages.sh@93 -- # local resv 00:08:45.067 11:52:50 -- setup/hugepages.sh@94 -- # local anon 00:08:45.067 11:52:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:45.067 11:52:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:45.067 11:52:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:45.067 11:52:50 -- setup/common.sh@18 -- # local node= 00:08:45.067 11:52:50 -- setup/common.sh@19 -- # local var val 00:08:45.067 11:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:45.067 11:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:45.067 11:52:50 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:45.067 11:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:45.068 11:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:45.068 11:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3548496 kB' 'MemAvailable: 9477772 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591500 kB' 'Inactive: 4551392 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143508 kB' 'Active(file): 1590412 kB' 'Inactive(file): 4407884 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 162076 kB' 'Mapped: 73044 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323176 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70680 kB' 'KernelStack: 4584 kB' 'PageTables: 4080 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 
00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 
00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 
-- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.068 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.068 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:45.069 11:52:50 -- setup/common.sh@33 -- # echo 0 00:08:45.069 11:52:50 -- setup/common.sh@33 -- # return 0 00:08:45.069 11:52:50 -- setup/hugepages.sh@97 -- # anon=0 00:08:45.069 11:52:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:45.069 11:52:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:45.069 11:52:50 -- setup/common.sh@18 -- # local node= 00:08:45.069 11:52:50 -- setup/common.sh@19 -- # local var val 00:08:45.069 11:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:45.069 11:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:45.069 11:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:45.069 11:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:45.069 11:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:45.069 11:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3548740 kB' 'MemAvailable: 9478016 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550928 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143044 kB' 'Active(file): 1590412 kB' 'Inactive(file): 4407884 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 161640 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323328 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70832 kB' 'KernelStack: 4400 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:45.069 11:52:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 
-- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.069 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.069 11:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 
11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.070 11:52:50 -- setup/common.sh@33 -- # echo 0 00:08:45.070 11:52:50 -- setup/common.sh@33 -- # return 0 00:08:45.070 11:52:50 -- setup/hugepages.sh@99 -- # surp=0 00:08:45.070 11:52:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:45.070 11:52:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:45.070 11:52:50 -- setup/common.sh@18 -- # local node= 00:08:45.070 11:52:50 -- setup/common.sh@19 -- # local var val 00:08:45.070 11:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:45.070 11:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:45.070 11:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:45.070 11:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:45.070 11:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:45.070 11:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3548740 kB' 'MemAvailable: 9478016 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550768 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142884 kB' 'Active(file): 1590412 kB' 'Inactive(file): 4407884 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 
'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 161524 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323328 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70832 kB' 'KernelStack: 4400 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 
00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.070 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.070 11:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- 
setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 
11:52:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:45.071 11:52:50 -- setup/common.sh@33 -- # echo 0 00:08:45.071 11:52:50 -- setup/common.sh@33 -- # return 0 00:08:45.071 nr_hugepages=1024 00:08:45.071 resv_hugepages=0 00:08:45.071 surplus_hugepages=0 00:08:45.071 anon_hugepages=0 00:08:45.071 11:52:50 -- setup/hugepages.sh@100 -- # resv=0 00:08:45.071 11:52:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:45.071 11:52:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:45.071 11:52:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:45.071 11:52:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:45.071 11:52:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:45.071 11:52:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:45.071 11:52:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:45.071 11:52:50 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:08:45.071 11:52:50 -- setup/common.sh@18 -- # local node= 00:08:45.071 11:52:50 -- setup/common.sh@19 -- # local var val 00:08:45.071 11:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:45.071 11:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:45.071 11:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:45.071 11:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:45.071 11:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:45.071 11:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3548740 kB' 'MemAvailable: 9478016 kB' 'Buffers: 40204 kB' 'Cached: 5969760 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550672 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142788 kB' 'Active(file): 1590412 kB' 'Inactive(file): 4407884 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 161400 kB' 'Mapped: 72776 kB' 'Shmem: 2604 kB' 'KReclaimable: 252496 kB' 'Slab: 323328 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70832 kB' 'KernelStack: 4368 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072900 kB' 'Committed_AS: 526748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.071 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 
00:08:45.071 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.071 11:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.072 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.072 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:45.073 11:52:50 -- setup/common.sh@33 -- # echo 1024 00:08:45.073 11:52:50 -- setup/common.sh@33 -- # return 0 00:08:45.073 11:52:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:45.073 11:52:50 -- setup/hugepages.sh@112 -- # get_nodes 00:08:45.073 
11:52:50 -- setup/hugepages.sh@27 -- # local node 00:08:45.073 11:52:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:45.073 11:52:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:45.073 11:52:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:45.073 11:52:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:45.073 11:52:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:45.073 11:52:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:45.073 11:52:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:45.073 11:52:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:45.073 11:52:50 -- setup/common.sh@18 -- # local node=0 00:08:45.073 11:52:50 -- setup/common.sh@19 -- # local var val 00:08:45.073 11:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:45.073 11:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:45.073 11:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:45.073 11:52:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:45.073 11:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:45.073 11:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242956 kB' 'MemFree: 3548740 kB' 'MemUsed: 8694216 kB' 'SwapCached: 0 kB' 'Active: 1591492 kB' 'Inactive: 4550756 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142872 kB' 'Active(file): 1590412 kB' 'Inactive(file): 4407884 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 6009964 kB' 'Mapped: 72776 kB' 'AnonPages: 161480 kB' 'Shmem: 2604 kB' 'KernelStack: 4384 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252496 kB' 'Slab: 323328 kB' 'SReclaimable: 252496 kB' 'SUnreclaim: 70832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.073 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.073 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # continue 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:45.074 11:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:45.074 11:52:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.074 11:52:50 -- setup/common.sh@33 -- # echo 0 00:08:45.074 11:52:50 -- setup/common.sh@33 -- # return 0 00:08:45.074 11:52:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:45.074 11:52:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:45.074 11:52:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:45.074 11:52:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:45.074 11:52:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:45.074 node0=1024 expecting 1024 00:08:45.074 11:52:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:45.074 00:08:45.074 real 0m1.657s 00:08:45.074 user 0m0.577s 00:08:45.074 sys 0m0.968s 00:08:45.074 11:52:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.074 11:52:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.074 ************************************ 00:08:45.074 END TEST no_shrink_alloc 00:08:45.074 ************************************ 00:08:45.332 11:52:50 -- setup/hugepages.sh@217 -- # clear_hp 00:08:45.332 11:52:50 -- setup/hugepages.sh@37 -- # local node hp 00:08:45.332 11:52:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:45.332 11:52:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.332 11:52:50 -- setup/hugepages.sh@41 -- # echo 0 00:08:45.332 11:52:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.332 11:52:50 -- setup/hugepages.sh@41 -- # echo 0 00:08:45.332 11:52:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:45.332 11:52:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:45.332 ************************************ 00:08:45.332 END TEST hugepages 00:08:45.332 ************************************ 00:08:45.332 00:08:45.332 real 0m7.309s 00:08:45.332 user 0m2.501s 00:08:45.332 sys 0m4.428s 00:08:45.332 11:52:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.332 11:52:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.332 11:52:50 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:45.332 11:52:50 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:08:45.332 11:52:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.332 11:52:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.332 ************************************ 00:08:45.332 START TEST driver 00:08:45.332 ************************************ 00:08:45.332 11:52:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:45.332 * Looking for test storage... 00:08:45.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:45.332 11:52:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:45.332 11:52:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:45.332 11:52:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:45.590 11:52:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:45.590 11:52:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:45.590 11:52:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:45.590 11:52:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:45.590 11:52:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:45.590 11:52:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:45.590 11:52:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.590 11:52:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:45.590 11:52:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:45.590 11:52:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:45.590 11:52:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:45.590 11:52:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:45.590 11:52:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:45.590 11:52:50 -- scripts/common.sh@344 -- # : 1 00:08:45.590 11:52:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:45.590 11:52:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.590 11:52:50 -- scripts/common.sh@364 -- # decimal 1 00:08:45.590 11:52:50 -- scripts/common.sh@352 -- # local d=1 00:08:45.590 11:52:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.590 11:52:50 -- scripts/common.sh@354 -- # echo 1 00:08:45.590 11:52:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:45.590 11:52:50 -- scripts/common.sh@365 -- # decimal 2 00:08:45.590 11:52:50 -- scripts/common.sh@352 -- # local d=2 00:08:45.590 11:52:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.590 11:52:50 -- scripts/common.sh@354 -- # echo 2 00:08:45.590 11:52:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:45.590 11:52:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:45.590 11:52:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:45.590 11:52:50 -- scripts/common.sh@367 -- # return 0 00:08:45.590 11:52:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.590 11:52:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:45.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.590 --rc genhtml_branch_coverage=1 00:08:45.590 --rc genhtml_function_coverage=1 00:08:45.590 --rc genhtml_legend=1 00:08:45.590 --rc geninfo_all_blocks=1 00:08:45.590 --rc geninfo_unexecuted_blocks=1 00:08:45.590 00:08:45.590 ' 00:08:45.590 11:52:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:45.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.590 --rc genhtml_branch_coverage=1 00:08:45.590 --rc genhtml_function_coverage=1 00:08:45.590 --rc genhtml_legend=1 00:08:45.590 --rc geninfo_all_blocks=1 00:08:45.590 --rc geninfo_unexecuted_blocks=1 00:08:45.590 00:08:45.590 ' 00:08:45.590 11:52:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:45.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.590 --rc genhtml_branch_coverage=1 00:08:45.590 --rc genhtml_function_coverage=1 00:08:45.590 --rc genhtml_legend=1 00:08:45.590 --rc geninfo_all_blocks=1 00:08:45.590 --rc geninfo_unexecuted_blocks=1 00:08:45.590 00:08:45.590 ' 00:08:45.590 11:52:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:45.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.590 --rc genhtml_branch_coverage=1 00:08:45.590 --rc genhtml_function_coverage=1 00:08:45.590 --rc genhtml_legend=1 00:08:45.590 --rc geninfo_all_blocks=1 00:08:45.590 --rc geninfo_unexecuted_blocks=1 00:08:45.590 00:08:45.590 ' 00:08:45.590 11:52:50 -- setup/driver.sh@68 -- # setup reset 00:08:45.590 11:52:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:45.590 11:52:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:45.849 11:52:51 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:45.849 11:52:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.849 11:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.849 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:45.849 ************************************ 00:08:45.849 START TEST guess_driver 00:08:45.849 ************************************ 00:08:45.849 11:52:51 -- common/autotest_common.sh@1114 -- # guess_driver 00:08:45.849 11:52:51 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:45.849 11:52:51 -- setup/driver.sh@47 -- # local fail=0 00:08:45.849 11:52:51 -- setup/driver.sh@49 -- # pick_driver 00:08:45.849 11:52:51 -- setup/driver.sh@36 -- # vfio 
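The trace that follows records pick_driver trying vfio first (no populated IOMMU groups, unsafe no-IOMMU mode left at N) and then settling on uio_pci_generic via modprobe --show-depends. A minimal standalone sketch of that decision, assuming the standard sysfs paths seen in the trace; the function name is made up for illustration and this is not the setup/driver.sh implementation itself:

# Hypothetical sketch: prefer vfio-pci when an IOMMU is usable, else fall back to uio_pci_generic.
pick_driver_sketch() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

On this VM the sketch, like the real test, would print uio_pci_generic, matching the "Looking for driver=uio_pci_generic" line recorded further on.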
00:08:45.849 11:52:51 -- setup/driver.sh@21 -- # local iommu_grups 00:08:45.849 11:52:51 -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:45.849 11:52:51 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:45.849 11:52:51 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:08:45.849 11:52:51 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:08:45.849 11:52:51 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:45.849 11:52:51 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:08:45.849 11:52:51 -- setup/driver.sh@32 -- # return 1 00:08:45.849 11:52:51 -- setup/driver.sh@38 -- # uio 00:08:45.849 11:52:51 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:45.849 11:52:51 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:45.849 11:52:51 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:45.849 11:52:51 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:45.849 11:52:51 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:08:45.849 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:08:45.849 11:52:51 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:08:45.849 11:52:51 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:45.849 11:52:51 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:46.107 Looking for driver=uio_pci_generic 00:08:46.107 11:52:51 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:46.107 11:52:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:46.107 11:52:51 -- setup/driver.sh@45 -- # setup output config 00:08:46.107 11:52:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:46.107 11:52:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:46.365 11:52:51 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:46.365 11:52:51 -- setup/driver.sh@58 -- # continue 00:08:46.365 11:52:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:46.365 11:52:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:46.365 11:52:51 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:46.365 11:52:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:47.738 11:52:52 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:47.738 11:52:52 -- setup/driver.sh@65 -- # setup reset 00:08:47.738 11:52:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:47.738 11:52:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.995 00:08:47.995 real 0m2.018s 00:08:47.995 user 0m0.466s 00:08:47.995 sys 0m1.539s 00:08:47.995 ************************************ 00:08:47.995 END TEST guess_driver 00:08:47.995 ************************************ 00:08:47.995 11:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.995 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.995 00:08:47.995 real 0m2.740s 00:08:47.995 user 0m0.912s 00:08:47.995 sys 0m1.814s 00:08:47.995 11:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.995 ************************************ 00:08:47.995 END TEST driver 00:08:47.995 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.995 ************************************ 00:08:47.995 11:52:53 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:47.995 11:52:53 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:08:47.995 11:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.995 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.995 ************************************ 00:08:47.995 START TEST devices 00:08:47.995 ************************************ 00:08:47.995 11:52:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:48.251 * Looking for test storage... 00:08:48.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:48.251 11:52:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:48.251 11:52:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:48.251 11:52:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:48.251 11:52:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:48.251 11:52:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:48.251 11:52:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:48.251 11:52:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:48.251 11:52:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:48.251 11:52:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:48.251 11:52:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.251 11:52:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:48.251 11:52:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:48.251 11:52:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:48.251 11:52:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:48.251 11:52:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:48.251 11:52:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:48.251 11:52:53 -- scripts/common.sh@344 -- # : 1 00:08:48.251 11:52:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:48.251 11:52:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.251 11:52:53 -- scripts/common.sh@364 -- # decimal 1 00:08:48.251 11:52:53 -- scripts/common.sh@352 -- # local d=1 00:08:48.251 11:52:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.251 11:52:53 -- scripts/common.sh@354 -- # echo 1 00:08:48.251 11:52:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:48.251 11:52:53 -- scripts/common.sh@365 -- # decimal 2 00:08:48.251 11:52:53 -- scripts/common.sh@352 -- # local d=2 00:08:48.251 11:52:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.251 11:52:53 -- scripts/common.sh@354 -- # echo 2 00:08:48.251 11:52:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:48.251 11:52:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:48.251 11:52:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:48.251 11:52:53 -- scripts/common.sh@367 -- # return 0 00:08:48.251 11:52:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.251 11:52:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:48.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.251 --rc genhtml_branch_coverage=1 00:08:48.251 --rc genhtml_function_coverage=1 00:08:48.251 --rc genhtml_legend=1 00:08:48.251 --rc geninfo_all_blocks=1 00:08:48.251 --rc geninfo_unexecuted_blocks=1 00:08:48.251 00:08:48.251 ' 00:08:48.251 11:52:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:48.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.251 --rc genhtml_branch_coverage=1 00:08:48.251 --rc genhtml_function_coverage=1 00:08:48.252 --rc genhtml_legend=1 00:08:48.252 --rc geninfo_all_blocks=1 00:08:48.252 --rc geninfo_unexecuted_blocks=1 00:08:48.252 00:08:48.252 ' 00:08:48.252 11:52:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:48.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.252 --rc genhtml_branch_coverage=1 00:08:48.252 --rc genhtml_function_coverage=1 00:08:48.252 --rc genhtml_legend=1 00:08:48.252 --rc geninfo_all_blocks=1 00:08:48.252 --rc geninfo_unexecuted_blocks=1 00:08:48.252 00:08:48.252 ' 00:08:48.252 11:52:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:48.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.252 --rc genhtml_branch_coverage=1 00:08:48.252 --rc genhtml_function_coverage=1 00:08:48.252 --rc genhtml_legend=1 00:08:48.252 --rc geninfo_all_blocks=1 00:08:48.252 --rc geninfo_unexecuted_blocks=1 00:08:48.252 00:08:48.252 ' 00:08:48.252 11:52:53 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:48.252 11:52:53 -- setup/devices.sh@192 -- # setup reset 00:08:48.252 11:52:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:48.252 11:52:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:48.816 11:52:54 -- setup/devices.sh@194 -- # get_zoned_devs 00:08:48.816 11:52:54 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:48.816 11:52:54 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:48.816 11:52:54 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:48.816 11:52:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:48.816 11:52:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:48.816 11:52:54 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:48.816 11:52:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:48.816 11:52:54 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:08:48.816 11:52:54 -- setup/devices.sh@196 -- # blocks=() 00:08:48.816 11:52:54 -- setup/devices.sh@196 -- # declare -a blocks 00:08:48.816 11:52:54 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:48.816 11:52:54 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:48.816 11:52:54 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:08:48.816 11:52:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:48.816 11:52:54 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:48.817 11:52:54 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:48.817 11:52:54 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:08:48.817 11:52:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:48.817 11:52:54 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:48.817 11:52:54 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:08:48.817 11:52:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:48.817 No valid GPT data, bailing 00:08:48.817 11:52:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:48.817 11:52:54 -- scripts/common.sh@393 -- # pt= 00:08:48.817 11:52:54 -- scripts/common.sh@394 -- # return 1 00:08:48.817 11:52:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:48.817 11:52:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:48.817 11:52:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:48.817 11:52:54 -- setup/common.sh@80 -- # echo 5368709120 00:08:48.817 11:52:54 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:48.817 11:52:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:48.817 11:52:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:08:48.817 11:52:54 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:08:48.817 11:52:54 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:48.817 11:52:54 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:48.817 11:52:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.817 11:52:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.817 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:08:48.817 ************************************ 00:08:48.817 START TEST nvme_mount 00:08:48.817 ************************************ 00:08:48.817 11:52:54 -- common/autotest_common.sh@1114 -- # nvme_mount 00:08:48.817 11:52:54 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:48.817 11:52:54 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:48.817 11:52:54 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:48.817 11:52:54 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:48.817 11:52:54 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:48.817 11:52:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:48.817 11:52:54 -- setup/common.sh@40 -- # local part_no=1 00:08:48.817 11:52:54 -- setup/common.sh@41 -- # local size=1073741824 00:08:48.817 11:52:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:48.817 11:52:54 -- setup/common.sh@44 -- # parts=() 00:08:48.817 11:52:54 -- setup/common.sh@44 -- # local parts 00:08:48.817 11:52:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:48.817 11:52:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:48.817 11:52:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:48.817 11:52:54 -- 
setup/common.sh@46 -- # (( part++ )) 00:08:48.817 11:52:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:48.817 11:52:54 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:48.817 11:52:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:48.817 11:52:54 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:49.750 Creating new GPT entries in memory. 00:08:49.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:49.750 other utilities. 00:08:49.750 11:52:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:49.750 11:52:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:49.750 11:52:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:49.750 11:52:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:49.750 11:52:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:51.137 Creating new GPT entries in memory. 00:08:51.137 The operation has completed successfully. 00:08:51.137 11:52:56 -- setup/common.sh@57 -- # (( part++ )) 00:08:51.137 11:52:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:51.137 11:52:56 -- setup/common.sh@62 -- # wait 108674 00:08:51.137 11:52:56 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.137 11:52:56 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:51.137 11:52:56 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.137 11:52:56 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:51.137 11:52:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:51.137 11:52:56 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.137 11:52:56 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:51.137 11:52:56 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:51.137 11:52:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:51.137 11:52:56 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.137 11:52:56 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:51.137 11:52:56 -- setup/devices.sh@53 -- # local found=0 00:08:51.137 11:52:56 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:51.137 11:52:56 -- setup/devices.sh@56 -- # : 00:08:51.137 11:52:56 -- setup/devices.sh@59 -- # local pci status 00:08:51.137 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.137 11:52:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:51.137 11:52:56 -- setup/devices.sh@47 -- # setup output config 00:08:51.137 11:52:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:51.137 11:52:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:51.137 11:52:56 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.137 11:52:56 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:51.137 11:52:56 -- 
setup/devices.sh@63 -- # found=1 00:08:51.137 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.137 11:52:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.137 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.137 11:52:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.137 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.513 11:52:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:52.513 11:52:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:52.513 11:52:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.513 11:52:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:52.513 11:52:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:52.513 11:52:57 -- setup/devices.sh@110 -- # cleanup_nvme 00:08:52.513 11:52:57 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.513 11:52:57 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.513 11:52:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:52.513 11:52:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:52.513 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:52.513 11:52:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:52.513 11:52:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:52.513 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:52.513 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:52.513 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:52.513 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:52.513 11:52:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:52.513 11:52:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:52.513 11:52:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.513 11:52:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:52.513 11:52:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:52.513 11:52:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.513 11:52:57 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:52.513 11:52:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:52.513 11:52:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:52.513 11:52:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.513 11:52:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:52.513 11:52:57 -- setup/devices.sh@53 -- # local found=0 00:08:52.513 11:52:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:52.513 11:52:57 -- setup/devices.sh@56 -- # : 00:08:52.513 11:52:57 -- setup/devices.sh@59 -- # local pci status 
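By this point the nvme_mount test has zapped the GPT with sgdisk, created a single 128 MiB partition, formatted it with mkfs.ext4 -qF and mounted it, then wiped the disk and repeated the exercise against the whole device; the trace is now verifying which PCI devices setup.sh leaves bound for that second mount. A condensed, hypothetical sketch of the partition-based prepare-and-mount sequence (the DISK/MNT names and the udevadm settle stand-in for the repo's uevent-sync helper are assumptions, the individual commands mirror the trace):

# Hypothetical sketch of the partition/format/mount sequence; DISK will be wiped.
DISK=/dev/nvme0n1           # assumed scratch NVMe disk
MNT=/tmp/nvme_mount_sketch  # assumed throwaway mount point

sgdisk "$DISK" --zap-all            # drop any existing GPT/MBR structures
sgdisk "$DISK" --new=1:2048:264191  # one 128 MiB partition, same range as in the trace
udevadm settle                      # wait for the partition node to appear
mkfs.ext4 -qF "${DISK}p1"           # quiet, force: no prompt over old signatures
mkdir -p "$MNT"
mount "${DISK}p1" "$MNT"
# ... exercise the filesystem ...
umount "$MNT"
wipefs --all "${DISK}p1"            # same signature cleanup the test teardown runs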
00:08:52.513 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.513 11:52:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:52.513 11:52:57 -- setup/devices.sh@47 -- # setup output config 00:08:52.513 11:52:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:52.513 11:52:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:52.513 11:52:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:52.513 11:52:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:52.513 11:52:57 -- setup/devices.sh@63 -- # found=1 00:08:52.513 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.513 11:52:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:52.513 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.771 11:52:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:52.771 11:52:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:53.702 11:52:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:53.702 11:52:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:53.702 11:52:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:53.702 11:52:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:53.702 11:52:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:53.702 11:52:59 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:53.702 11:52:59 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:08:53.702 11:52:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:53.702 11:52:59 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:53.702 11:52:59 -- setup/devices.sh@50 -- # local mount_point= 00:08:53.702 11:52:59 -- setup/devices.sh@51 -- # local test_file= 00:08:53.702 11:52:59 -- setup/devices.sh@53 -- # local found=0 00:08:53.702 11:52:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:53.702 11:52:59 -- setup/devices.sh@59 -- # local pci status 00:08:53.702 11:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:53.702 11:52:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:53.702 11:52:59 -- setup/devices.sh@47 -- # setup output config 00:08:53.702 11:52:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:53.702 11:52:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:53.960 11:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:53.960 11:52:59 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:53.960 11:52:59 -- setup/devices.sh@63 -- # found=1 00:08:53.960 11:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:53.960 11:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:53.960 11:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:54.218 11:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:54.218 11:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.179 11:53:00 -- setup/devices.sh@66 -- # (( found == 1 )) 
00:08:55.179 11:53:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:55.179 11:53:00 -- setup/devices.sh@68 -- # return 0 00:08:55.179 11:53:00 -- setup/devices.sh@128 -- # cleanup_nvme 00:08:55.179 11:53:00 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:55.179 11:53:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:55.179 11:53:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:55.179 11:53:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:55.179 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:55.179 00:08:55.179 real 0m6.417s 00:08:55.179 user 0m0.716s 00:08:55.179 sys 0m3.737s 00:08:55.179 11:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.179 ************************************ 00:08:55.179 END TEST nvme_mount 00:08:55.179 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:08:55.179 ************************************ 00:08:55.179 11:53:00 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:55.179 11:53:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.179 11:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.179 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:08:55.179 ************************************ 00:08:55.179 START TEST dm_mount 00:08:55.179 ************************************ 00:08:55.179 11:53:00 -- common/autotest_common.sh@1114 -- # dm_mount 00:08:55.179 11:53:00 -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:55.179 11:53:00 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:55.179 11:53:00 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:55.179 11:53:00 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:55.179 11:53:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:55.179 11:53:00 -- setup/common.sh@40 -- # local part_no=2 00:08:55.179 11:53:00 -- setup/common.sh@41 -- # local size=1073741824 00:08:55.179 11:53:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:55.179 11:53:00 -- setup/common.sh@44 -- # parts=() 00:08:55.179 11:53:00 -- setup/common.sh@44 -- # local parts 00:08:55.179 11:53:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:55.179 11:53:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:55.179 11:53:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:55.179 11:53:00 -- setup/common.sh@46 -- # (( part++ )) 00:08:55.179 11:53:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:55.179 11:53:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:55.179 11:53:00 -- setup/common.sh@46 -- # (( part++ )) 00:08:55.179 11:53:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:55.179 11:53:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:55.179 11:53:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:55.180 11:53:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:56.556 Creating new GPT entries in memory. 00:08:56.556 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:56.556 other utilities. 00:08:56.556 11:53:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:56.556 11:53:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:56.556 11:53:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:08:56.556 11:53:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:56.556 11:53:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:57.490 Creating new GPT entries in memory. 00:08:57.490 The operation has completed successfully. 00:08:57.490 11:53:02 -- setup/common.sh@57 -- # (( part++ )) 00:08:57.490 11:53:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:57.490 11:53:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:57.490 11:53:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:57.490 11:53:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:58.424 The operation has completed successfully. 00:08:58.424 11:53:03 -- setup/common.sh@57 -- # (( part++ )) 00:08:58.424 11:53:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:58.424 11:53:03 -- setup/common.sh@62 -- # wait 109161 00:08:58.424 11:53:03 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:58.424 11:53:03 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:58.424 11:53:03 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:58.424 11:53:03 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:58.424 11:53:03 -- setup/devices.sh@160 -- # for t in {1..5} 00:08:58.424 11:53:03 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:58.424 11:53:03 -- setup/devices.sh@161 -- # break 00:08:58.424 11:53:03 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:58.424 11:53:03 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:58.424 11:53:03 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:58.424 11:53:03 -- setup/devices.sh@166 -- # dm=dm-0 00:08:58.424 11:53:03 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:58.424 11:53:03 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:58.424 11:53:03 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:58.424 11:53:03 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:58.424 11:53:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:58.424 11:53:03 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:58.424 11:53:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:58.424 11:53:03 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:58.424 11:53:03 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:58.424 11:53:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:58.424 11:53:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:58.424 11:53:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:58.424 11:53:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:58.424 11:53:03 -- setup/devices.sh@53 -- # local found=0 00:08:58.424 11:53:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:08:58.424 11:53:03 -- setup/devices.sh@56 -- # : 00:08:58.424 11:53:03 -- setup/devices.sh@59 -- # local pci status 00:08:58.424 11:53:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:58.424 11:53:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:58.424 11:53:03 -- setup/devices.sh@47 -- # setup output config 00:08:58.424 11:53:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:58.424 11:53:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:58.681 11:53:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:58.682 11:53:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:58.682 11:53:03 -- setup/devices.sh@63 -- # found=1 00:08:58.682 11:53:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:58.682 11:53:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:58.682 11:53:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:58.682 11:53:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:58.682 11:53:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:00.054 11:53:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:00.054 11:53:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:09:00.054 11:53:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:00.054 11:53:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:09:00.054 11:53:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:09:00.054 11:53:05 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:00.054 11:53:05 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:09:00.054 11:53:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:09:00.054 11:53:05 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:09:00.054 11:53:05 -- setup/devices.sh@50 -- # local mount_point= 00:09:00.054 11:53:05 -- setup/devices.sh@51 -- # local test_file= 00:09:00.054 11:53:05 -- setup/devices.sh@53 -- # local found=0 00:09:00.054 11:53:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:00.054 11:53:05 -- setup/devices.sh@59 -- # local pci status 00:09:00.054 11:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:00.054 11:53:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:09:00.054 11:53:05 -- setup/devices.sh@47 -- # setup output config 00:09:00.054 11:53:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:00.054 11:53:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:00.054 11:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:09:00.054 11:53:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:09:00.054 11:53:05 -- setup/devices.sh@63 -- # found=1 00:09:00.054 11:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:00.054 11:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == 
\0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:09:00.054 11:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:00.054 11:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:09:00.054 11:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:00.988 11:53:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:00.988 11:53:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:00.988 11:53:06 -- setup/devices.sh@68 -- # return 0 00:09:00.988 11:53:06 -- setup/devices.sh@187 -- # cleanup_dm 00:09:00.988 11:53:06 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:00.988 11:53:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:00.988 11:53:06 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:09:01.246 11:53:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:09:01.246 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:01.246 11:53:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:09:01.246 00:09:01.246 real 0m5.891s 00:09:01.246 user 0m0.438s 00:09:01.246 sys 0m2.386s 00:09:01.246 11:53:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.246 ************************************ 00:09:01.246 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.246 END TEST dm_mount 00:09:01.246 ************************************ 00:09:01.246 11:53:06 -- setup/devices.sh@1 -- # cleanup 00:09:01.246 11:53:06 -- setup/devices.sh@11 -- # cleanup_nvme 00:09:01.246 11:53:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:01.246 11:53:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:01.246 11:53:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:01.246 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:01.246 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:01.246 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:01.246 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:01.246 11:53:06 -- setup/devices.sh@12 -- # cleanup_dm 00:09:01.246 11:53:06 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:01.246 11:53:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:01.246 11:53:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:09:01.246 11:53:06 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:09:01.246 00:09:01.246 real 0m13.187s 00:09:01.246 user 0m1.614s 00:09:01.246 sys 0m6.538s 00:09:01.246 11:53:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.246 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.246 ************************************ 00:09:01.246 END TEST devices 00:09:01.246 ************************************ 00:09:01.246 00:09:01.246 real 0m28.677s 00:09:01.246 user 0m6.844s 00:09:01.246 sys 0m16.460s 00:09:01.246 11:53:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.246 
************************************ 00:09:01.246 END TEST setup.sh 00:09:01.246 ************************************ 00:09:01.246 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.246 11:53:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:01.503 Hugepages 00:09:01.503 node hugesize free / total 00:09:01.503 node0 1048576kB 0 / 0 00:09:01.503 node0 2048kB 2048 / 2048 00:09:01.503 00:09:01.503 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:01.504 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:01.504 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:01.504 11:53:06 -- spdk/autotest.sh@128 -- # uname -s 00:09:01.504 11:53:06 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:09:01.504 11:53:06 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:09:01.504 11:53:06 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:02.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:09:02.069 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.003 11:53:08 -- common/autotest_common.sh@1527 -- # sleep 1 00:09:04.397 11:53:09 -- common/autotest_common.sh@1528 -- # bdfs=() 00:09:04.397 11:53:09 -- common/autotest_common.sh@1528 -- # local bdfs 00:09:04.397 11:53:09 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:09:04.397 11:53:09 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:09:04.397 11:53:09 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:04.397 11:53:09 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:04.397 11:53:09 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:04.397 11:53:09 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:04.397 11:53:09 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:04.397 11:53:09 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:09:04.397 11:53:09 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:09:04.397 11:53:09 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:04.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:09:04.397 Waiting for block devices as requested 00:09:04.655 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.655 11:53:09 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:09:04.655 11:53:09 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:09:04.655 11:53:09 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:09:04.655 11:53:09 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:09:04.655 11:53:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:04.655 
11:53:09 -- common/autotest_common.sh@1540 -- # grep oacs 00:09:04.655 11:53:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:04.655 11:53:09 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:09:04.655 11:53:09 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:09:04.655 11:53:09 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:09:04.655 11:53:09 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:09:04.655 11:53:09 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:09:04.655 11:53:09 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:09:04.655 11:53:10 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:09:04.655 11:53:10 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:09:04.655 11:53:10 -- common/autotest_common.sh@1552 -- # continue 00:09:04.655 11:53:10 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:09:04.656 11:53:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.656 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:04.656 11:53:10 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:09:04.656 11:53:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.656 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:04.656 11:53:10 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:04.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:09:05.173 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.180 11:53:11 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:09:06.180 11:53:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.180 11:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:06.180 11:53:11 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:09:06.180 11:53:11 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:09:06.180 11:53:11 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:09:06.180 11:53:11 -- common/autotest_common.sh@1572 -- # bdfs=() 00:09:06.180 11:53:11 -- common/autotest_common.sh@1572 -- # local bdfs 00:09:06.180 11:53:11 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:09:06.180 11:53:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:06.180 11:53:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:06.180 11:53:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:06.180 11:53:11 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:06.180 11:53:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:06.180 11:53:11 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:09:06.180 11:53:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:09:06.180 11:53:11 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:09:06.180 11:53:11 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:09:06.180 11:53:11 -- common/autotest_common.sh@1575 -- # device=0x0010 00:09:06.180 11:53:11 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:06.180 11:53:11 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:09:06.180 11:53:11 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:09:06.180 11:53:11 -- common/autotest_common.sh@1588 -- # return 0 00:09:06.180 11:53:11 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:09:06.180 11:53:11 -- spdk/autotest.sh@149 -- # run_test unittest 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:06.180 11:53:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:06.180 11:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.180 11:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:06.180 ************************************ 00:09:06.180 START TEST unittest 00:09:06.180 ************************************ 00:09:06.180 11:53:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:06.180 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:06.180 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:09:06.180 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:09:06.180 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:06.180 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:09:06.180 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:06.180 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:06.180 ++ rpc_py=rpc_cmd 00:09:06.180 ++ set -e 00:09:06.180 ++ shopt -s nullglob 00:09:06.180 ++ shopt -s extglob 00:09:06.180 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:06.180 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:06.180 +++ CONFIG_WPDK_DIR= 00:09:06.180 +++ CONFIG_ASAN=y 00:09:06.180 +++ CONFIG_VBDEV_COMPRESS=n 00:09:06.180 +++ CONFIG_HAVE_EXECINFO_H=y 00:09:06.180 +++ CONFIG_USDT=n 00:09:06.180 +++ CONFIG_CUSTOMOCF=n 00:09:06.180 +++ CONFIG_PREFIX=/usr/local 00:09:06.180 +++ CONFIG_RBD=n 00:09:06.180 +++ CONFIG_LIBDIR= 00:09:06.180 +++ CONFIG_IDXD=y 00:09:06.180 +++ CONFIG_NVME_CUSE=y 00:09:06.180 +++ CONFIG_SMA=n 00:09:06.180 +++ CONFIG_VTUNE=n 00:09:06.180 +++ CONFIG_TSAN=n 00:09:06.180 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:06.180 +++ CONFIG_VFIO_USER_DIR= 00:09:06.180 +++ CONFIG_PGO_CAPTURE=n 00:09:06.180 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:06.180 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:06.180 +++ CONFIG_LTO=n 00:09:06.180 +++ CONFIG_ISCSI_INITIATOR=y 00:09:06.180 +++ CONFIG_CET=n 00:09:06.180 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:06.180 +++ CONFIG_OCF_PATH= 00:09:06.180 +++ CONFIG_RDMA_SET_TOS=y 00:09:06.180 +++ CONFIG_HAVE_ARC4RANDOM=n 00:09:06.180 +++ CONFIG_HAVE_LIBARCHIVE=n 00:09:06.180 +++ CONFIG_UBLK=n 00:09:06.180 +++ CONFIG_ISAL_CRYPTO=y 00:09:06.180 +++ CONFIG_OPENSSL_PATH= 00:09:06.180 +++ CONFIG_OCF=n 00:09:06.180 +++ CONFIG_FUSE=n 00:09:06.180 +++ CONFIG_VTUNE_DIR= 00:09:06.180 +++ CONFIG_FUZZER_LIB= 00:09:06.180 +++ CONFIG_FUZZER=n 00:09:06.180 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:09:06.180 +++ CONFIG_CRYPTO=n 00:09:06.180 +++ CONFIG_PGO_USE=n 00:09:06.180 +++ CONFIG_VHOST=y 00:09:06.180 +++ CONFIG_DAOS=n 00:09:06.180 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:09:06.180 +++ CONFIG_DAOS_DIR= 00:09:06.180 +++ CONFIG_UNIT_TESTS=y 00:09:06.180 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:06.180 +++ CONFIG_VIRTIO=y 00:09:06.180 +++ CONFIG_COVERAGE=y 00:09:06.180 +++ CONFIG_RDMA=y 00:09:06.180 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:06.180 +++ CONFIG_URING_PATH= 00:09:06.180 +++ CONFIG_XNVME=n 00:09:06.180 +++ CONFIG_VFIO_USER=n 00:09:06.180 +++ CONFIG_ARCH=native 00:09:06.180 +++ CONFIG_URING_ZNS=n 00:09:06.180 +++ CONFIG_WERROR=y 00:09:06.180 +++ CONFIG_HAVE_LIBBSD=n 00:09:06.180 +++ CONFIG_UBSAN=y 00:09:06.180 +++ CONFIG_IPSEC_MB_DIR= 00:09:06.180 +++ CONFIG_GOLANG=n 00:09:06.180 +++ CONFIG_ISAL=y 00:09:06.180 +++ CONFIG_IDXD_KERNEL=n 
00:09:06.180 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:09:06.180 +++ CONFIG_RDMA_PROV=verbs 00:09:06.180 +++ CONFIG_APPS=y 00:09:06.180 +++ CONFIG_SHARED=n 00:09:06.180 +++ CONFIG_FC_PATH= 00:09:06.180 +++ CONFIG_DPDK_PKG_CONFIG=n 00:09:06.180 +++ CONFIG_FC=n 00:09:06.180 +++ CONFIG_AVAHI=n 00:09:06.180 +++ CONFIG_FIO_PLUGIN=y 00:09:06.180 +++ CONFIG_RAID5F=y 00:09:06.180 +++ CONFIG_EXAMPLES=y 00:09:06.180 +++ CONFIG_TESTS=y 00:09:06.180 +++ CONFIG_CRYPTO_MLX5=n 00:09:06.180 +++ CONFIG_MAX_LCORES= 00:09:06.180 +++ CONFIG_IPSEC_MB=n 00:09:06.180 +++ CONFIG_DEBUG=y 00:09:06.180 +++ CONFIG_DPDK_COMPRESSDEV=n 00:09:06.180 +++ CONFIG_CROSS_PREFIX= 00:09:06.180 +++ CONFIG_URING=n 00:09:06.180 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:06.181 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:06.181 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:06.181 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:06.181 +++ _root=/home/vagrant/spdk_repo/spdk 00:09:06.181 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:06.181 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:06.181 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:06.181 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:06.181 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:06.181 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:06.181 +++ VHOST_APP=("$_app_dir/vhost") 00:09:06.181 +++ DD_APP=("$_app_dir/spdk_dd") 00:09:06.181 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:09:06.181 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:06.181 +++ [[ #ifndef SPDK_CONFIG_H 00:09:06.181 #define SPDK_CONFIG_H 00:09:06.181 #define SPDK_CONFIG_APPS 1 00:09:06.181 #define SPDK_CONFIG_ARCH native 00:09:06.181 #define SPDK_CONFIG_ASAN 1 00:09:06.181 #undef SPDK_CONFIG_AVAHI 00:09:06.181 #undef SPDK_CONFIG_CET 00:09:06.181 #define SPDK_CONFIG_COVERAGE 1 00:09:06.181 #define SPDK_CONFIG_CROSS_PREFIX 00:09:06.181 #undef SPDK_CONFIG_CRYPTO 00:09:06.181 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:06.181 #undef SPDK_CONFIG_CUSTOMOCF 00:09:06.181 #undef SPDK_CONFIG_DAOS 00:09:06.181 #define SPDK_CONFIG_DAOS_DIR 00:09:06.181 #define SPDK_CONFIG_DEBUG 1 00:09:06.181 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:06.181 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:09:06.181 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:09:06.181 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:09:06.181 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:06.181 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:06.181 #define SPDK_CONFIG_EXAMPLES 1 00:09:06.181 #undef SPDK_CONFIG_FC 00:09:06.181 #define SPDK_CONFIG_FC_PATH 00:09:06.181 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:06.181 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:06.181 #undef SPDK_CONFIG_FUSE 00:09:06.181 #undef SPDK_CONFIG_FUZZER 00:09:06.181 #define SPDK_CONFIG_FUZZER_LIB 00:09:06.181 #undef SPDK_CONFIG_GOLANG 00:09:06.181 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:09:06.181 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:06.181 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:06.181 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:06.181 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:06.181 #define SPDK_CONFIG_IDXD 1 00:09:06.181 #undef SPDK_CONFIG_IDXD_KERNEL 00:09:06.181 #undef SPDK_CONFIG_IPSEC_MB 00:09:06.181 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:06.181 #define SPDK_CONFIG_ISAL 1 
00:09:06.181 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:06.181 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:06.181 #define SPDK_CONFIG_LIBDIR 00:09:06.181 #undef SPDK_CONFIG_LTO 00:09:06.181 #define SPDK_CONFIG_MAX_LCORES 00:09:06.181 #define SPDK_CONFIG_NVME_CUSE 1 00:09:06.181 #undef SPDK_CONFIG_OCF 00:09:06.181 #define SPDK_CONFIG_OCF_PATH 00:09:06.181 #define SPDK_CONFIG_OPENSSL_PATH 00:09:06.181 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:06.181 #undef SPDK_CONFIG_PGO_USE 00:09:06.181 #define SPDK_CONFIG_PREFIX /usr/local 00:09:06.181 #define SPDK_CONFIG_RAID5F 1 00:09:06.181 #undef SPDK_CONFIG_RBD 00:09:06.181 #define SPDK_CONFIG_RDMA 1 00:09:06.181 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:06.181 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:06.181 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:06.181 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:06.181 #undef SPDK_CONFIG_SHARED 00:09:06.181 #undef SPDK_CONFIG_SMA 00:09:06.181 #define SPDK_CONFIG_TESTS 1 00:09:06.181 #undef SPDK_CONFIG_TSAN 00:09:06.181 #undef SPDK_CONFIG_UBLK 00:09:06.181 #define SPDK_CONFIG_UBSAN 1 00:09:06.181 #define SPDK_CONFIG_UNIT_TESTS 1 00:09:06.181 #undef SPDK_CONFIG_URING 00:09:06.181 #define SPDK_CONFIG_URING_PATH 00:09:06.181 #undef SPDK_CONFIG_URING_ZNS 00:09:06.181 #undef SPDK_CONFIG_USDT 00:09:06.181 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:06.181 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:06.181 #undef SPDK_CONFIG_VFIO_USER 00:09:06.181 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:06.181 #define SPDK_CONFIG_VHOST 1 00:09:06.181 #define SPDK_CONFIG_VIRTIO 1 00:09:06.181 #undef SPDK_CONFIG_VTUNE 00:09:06.181 #define SPDK_CONFIG_VTUNE_DIR 00:09:06.181 #define SPDK_CONFIG_WERROR 1 00:09:06.181 #define SPDK_CONFIG_WPDK_DIR 00:09:06.181 #undef SPDK_CONFIG_XNVME 00:09:06.181 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:06.181 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:06.181 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.181 +++ [[ -e /bin/wpdk_common.sh ]] 00:09:06.181 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.181 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.181 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:06.181 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:06.181 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:06.181 ++++ export PATH 00:09:06.181 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:06.181 ++ source 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:06.181 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:06.181 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:06.181 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:06.181 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:06.181 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:06.181 +++ TEST_TAG=N/A 00:09:06.181 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:06.181 ++ : 1 00:09:06.181 ++ export RUN_NIGHTLY 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_RUN_VALGRIND 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_TEST_UNITTEST 00:09:06.181 ++ : 00:09:06.181 ++ export SPDK_TEST_AUTOBUILD 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_RELEASE_BUILD 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_ISAL 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_ISCSI 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_ISCSI_INITIATOR 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_TEST_NVME 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVME_PMR 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVME_BP 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVME_CLI 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVME_CUSE 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVME_FDP 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVMF 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_VFIOUSER 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_VFIOUSER_QEMU 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_FUZZER 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_FUZZER_SHORT 00:09:06.181 ++ : rdma 00:09:06.181 ++ export SPDK_TEST_NVMF_TRANSPORT 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_RBD 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_VHOST 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_TEST_BLOCKDEV 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_IOAT 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_BLOBFS 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_VHOST_INIT 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_LVOL 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_VBDEV_COMPRESS 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_RUN_ASAN 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_RUN_UBSAN 00:09:06.181 ++ : /home/vagrant/spdk_repo/dpdk/build 00:09:06.181 ++ export SPDK_RUN_EXTERNAL_DPDK 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_RUN_NON_ROOT 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_CRYPTO 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_FTL 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_OCF 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_VMD 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_OPAL 00:09:06.181 ++ : v22.11.4 00:09:06.181 ++ export SPDK_TEST_NATIVE_DPDK 00:09:06.181 ++ : true 00:09:06.181 ++ export SPDK_AUTOTEST_X 00:09:06.181 ++ : 1 00:09:06.181 ++ export SPDK_TEST_RAID5 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_URING 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_USDT 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_USE_IGB_UIO 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_SCHEDULER 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_SCANBUILD 00:09:06.181 ++ : 00:09:06.181 ++ export SPDK_TEST_NVMF_NICS 00:09:06.181 ++ : 0 00:09:06.181 ++ 
export SPDK_TEST_SMA 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_DAOS 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_XNVME 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_ACCEL_DSA 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_ACCEL_IAA 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_ACCEL_IOAT 00:09:06.181 ++ : 00:09:06.181 ++ export SPDK_TEST_FUZZER_TARGET 00:09:06.181 ++ : 0 00:09:06.181 ++ export SPDK_TEST_NVMF_MDNS 00:09:06.181 ++ : 0 00:09:06.182 ++ export SPDK_JSONRPC_GO_CLIENT 00:09:06.182 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:06.182 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:06.182 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:09:06.182 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:09:06.182 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:06.182 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:06.182 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:06.182 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:06.182 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:06.182 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:09:06.182 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:06.182 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:06.182 ++ export PYTHONDONTWRITEBYTECODE=1 00:09:06.182 ++ PYTHONDONTWRITEBYTECODE=1 00:09:06.182 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:06.182 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:06.182 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:06.182 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:06.182 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:09:06.182 ++ rm -rf /var/tmp/asan_suppression_file 00:09:06.182 ++ cat 00:09:06.182 ++ echo leak:libfuse3.so 00:09:06.182 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:06.182 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:06.182 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:06.182 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:06.182 ++ '[' -z /var/spdk/dependencies ']' 00:09:06.182 ++ export DEPENDENCY_DIR 00:09:06.182 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:06.182 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:06.182 ++ 
export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:06.182 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:06.182 ++ export QEMU_BIN= 00:09:06.182 ++ QEMU_BIN= 00:09:06.182 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:09:06.182 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:09:06.182 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:06.182 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:06.182 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:06.182 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:06.182 ++ _LCOV_MAIN=0 00:09:06.182 ++ _LCOV_LLVM=1 00:09:06.182 ++ _LCOV= 00:09:06.182 ++ [[ '' == *clang* ]] 00:09:06.182 ++ [[ 0 -eq 1 ]] 00:09:06.182 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:06.182 ++ _lcov_opt[_LCOV_MAIN]= 00:09:06.182 ++ lcov_opt= 00:09:06.182 ++ '[' 0 -eq 0 ']' 00:09:06.182 ++ export valgrind= 00:09:06.182 ++ valgrind= 00:09:06.182 +++ uname -s 00:09:06.182 ++ '[' Linux = Linux ']' 00:09:06.182 ++ HUGEMEM=4096 00:09:06.182 ++ export CLEAR_HUGE=yes 00:09:06.182 ++ CLEAR_HUGE=yes 00:09:06.182 ++ [[ 0 -eq 1 ]] 00:09:06.182 ++ [[ 0 -eq 1 ]] 00:09:06.182 ++ MAKE=make 00:09:06.182 +++ nproc 00:09:06.182 ++ MAKEFLAGS=-j10 00:09:06.182 ++ export HUGEMEM=4096 00:09:06.182 ++ HUGEMEM=4096 00:09:06.465 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:06.465 ++ NO_HUGE=() 00:09:06.465 ++ TEST_MODE= 00:09:06.465 ++ [[ -z '' ]] 00:09:06.465 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:09:06.465 ++ exec 00:09:06.465 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:09:06.465 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:09:06.465 ++ set_test_storage 2147483648 00:09:06.465 ++ [[ -v testdir ]] 00:09:06.465 ++ local requested_size=2147483648 00:09:06.465 ++ local mount target_dir 00:09:06.465 ++ local -A mounts fss sizes avails uses 00:09:06.465 ++ local source fs size avail mount use 00:09:06.465 ++ local storage_fallback storage_candidates 00:09:06.465 +++ mktemp -udt spdk.XXXXXX 00:09:06.465 ++ storage_fallback=/tmp/spdk.jYyMHp 00:09:06.465 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:06.465 ++ [[ -n '' ]] 00:09:06.465 ++ [[ -n '' ]] 00:09:06.465 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.jYyMHp/tests/unit /tmp/spdk.jYyMHp 00:09:06.465 ++ requested_size=2214592512 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 +++ df -T 00:09:06.465 +++ grep -v Filesystem 00:09:06.465 ++ mounts["$mount"]=tmpfs 00:09:06.465 ++ fss["$mount"]=tmpfs 00:09:06.465 ++ avails["$mount"]=1252589568 00:09:06.465 ++ sizes["$mount"]=1253679104 00:09:06.465 ++ uses["$mount"]=1089536 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 ++ mounts["$mount"]=/dev/vda1 00:09:06.465 ++ fss["$mount"]=ext4 00:09:06.465 ++ avails["$mount"]=9010241536 00:09:06.465 ++ sizes["$mount"]=20616794112 00:09:06.465 ++ uses["$mount"]=11589775360 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 ++ mounts["$mount"]=tmpfs 00:09:06.465 ++ fss["$mount"]=tmpfs 00:09:06.465 ++ avails["$mount"]=6268391424 00:09:06.465 ++ sizes["$mount"]=6268391424 00:09:06.465 ++ uses["$mount"]=0 00:09:06.465 ++ read -r source fs size use avail _ mount 
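[annotation] The df records traced here are autotest_common.sh's set_test_storage reading `df -T` output into parallel arrays; a little further down (the "* Looking for test storage..." records) it compares the free space on each candidate directory's mount against the requested size and picks the first one that fits. A minimal stand-alone sketch of that probe pattern follows; the function name pick_test_storage and its arguments are illustrative only, not the real helper from autotest_common.sh.

    # Sketch only: find the first candidate directory whose filesystem has enough free space.
    # Usage: pick_test_storage REQUESTED_BYTES CANDIDATE_DIR...
    pick_test_storage() {
        local requested=$1; shift
        local candidate avail_kb
        for candidate in "$@"; do
            [[ -d $candidate ]] || continue
            # df -Pk: POSIX layout, sizes in 1K blocks; column 4 of the data row is "Available".
            avail_kb=$(df -Pk "$candidate" | awk 'NR==2 {print $4}') || continue
            if (( avail_kb * 1024 >= requested )); then
                printf '%s\n' "$candidate"
                return 0
            fi
        done
        return 1
    }

    # e.g. ask for ~2 GiB, preferring the unit-test dir over /tmp:
    pick_test_storage $((2 * 1024 * 1024 * 1024)) /home/vagrant/spdk_repo/spdk/test/unit /tmp

[end annotation]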
00:09:06.465 ++ mounts["$mount"]=tmpfs 00:09:06.465 ++ fss["$mount"]=tmpfs 00:09:06.465 ++ avails["$mount"]=5242880 00:09:06.465 ++ sizes["$mount"]=5242880 00:09:06.465 ++ uses["$mount"]=0 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 ++ mounts["$mount"]=/dev/vda15 00:09:06.465 ++ fss["$mount"]=vfat 00:09:06.465 ++ avails["$mount"]=103061504 00:09:06.465 ++ sizes["$mount"]=109395968 00:09:06.465 ++ uses["$mount"]=6334464 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 ++ mounts["$mount"]=tmpfs 00:09:06.465 ++ fss["$mount"]=tmpfs 00:09:06.465 ++ avails["$mount"]=1253670912 00:09:06.465 ++ sizes["$mount"]=1253675008 00:09:06.465 ++ uses["$mount"]=4096 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:09:06.465 ++ fss["$mount"]=fuse.sshfs 00:09:06.465 ++ avails["$mount"]=93108883456 00:09:06.465 ++ sizes["$mount"]=105088212992 00:09:06.465 ++ uses["$mount"]=6593896448 00:09:06.465 ++ read -r source fs size use avail _ mount 00:09:06.465 ++ printf '* Looking for test storage...\n' 00:09:06.465 * Looking for test storage... 00:09:06.465 ++ local target_space new_size 00:09:06.465 ++ for target_dir in "${storage_candidates[@]}" 00:09:06.465 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:09:06.465 +++ awk '$1 !~ /Filesystem/{print $6}' 00:09:06.465 ++ mount=/ 00:09:06.465 ++ target_space=9010241536 00:09:06.465 ++ (( target_space == 0 || target_space < requested_size )) 00:09:06.465 ++ (( target_space >= requested_size )) 00:09:06.465 ++ [[ ext4 == tmpfs ]] 00:09:06.465 ++ [[ ext4 == ramfs ]] 00:09:06.465 ++ [[ / == / ]] 00:09:06.465 ++ new_size=13804367872 00:09:06.465 ++ (( new_size * 100 / sizes[/] > 95 )) 00:09:06.465 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:09:06.465 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:09:06.465 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:09:06.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:09:06.465 ++ return 0 00:09:06.465 ++ set -o errtrace 00:09:06.465 ++ shopt -s extdebug 00:09:06.465 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:09:06.465 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:06.465 11:53:11 -- common/autotest_common.sh@1682 -- # true 00:09:06.465 11:53:11 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:09:06.465 11:53:11 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:09:06.465 11:53:11 -- common/autotest_common.sh@29 -- # exec 00:09:06.465 11:53:11 -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:06.465 11:53:11 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:06.465 11:53:11 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:06.465 11:53:11 -- common/autotest_common.sh@18 -- # set -x 00:09:06.465 11:53:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:06.465 11:53:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:06.465 11:53:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:06.465 11:53:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:06.465 11:53:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:06.465 11:53:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:06.465 11:53:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:06.465 11:53:11 -- scripts/common.sh@335 -- # IFS=.-: 00:09:06.465 11:53:11 -- scripts/common.sh@335 -- # read -ra ver1 00:09:06.465 11:53:11 -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.465 11:53:11 -- scripts/common.sh@336 -- # read -ra ver2 00:09:06.465 11:53:11 -- scripts/common.sh@337 -- # local 'op=<' 00:09:06.465 11:53:11 -- scripts/common.sh@339 -- # ver1_l=2 00:09:06.465 11:53:11 -- scripts/common.sh@340 -- # ver2_l=1 00:09:06.465 11:53:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:06.465 11:53:11 -- scripts/common.sh@343 -- # case "$op" in 00:09:06.465 11:53:11 -- scripts/common.sh@344 -- # : 1 00:09:06.465 11:53:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:06.465 11:53:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.465 11:53:11 -- scripts/common.sh@364 -- # decimal 1 00:09:06.465 11:53:11 -- scripts/common.sh@352 -- # local d=1 00:09:06.465 11:53:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.465 11:53:11 -- scripts/common.sh@354 -- # echo 1 00:09:06.465 11:53:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:06.465 11:53:11 -- scripts/common.sh@365 -- # decimal 2 00:09:06.465 11:53:11 -- scripts/common.sh@352 -- # local d=2 00:09:06.465 11:53:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.465 11:53:11 -- scripts/common.sh@354 -- # echo 2 00:09:06.465 11:53:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:06.465 11:53:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:06.465 11:53:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:06.465 11:53:11 -- scripts/common.sh@367 -- # return 0 00:09:06.465 11:53:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.465 11:53:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:06.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.465 --rc genhtml_branch_coverage=1 00:09:06.465 --rc genhtml_function_coverage=1 00:09:06.465 --rc genhtml_legend=1 00:09:06.465 --rc geninfo_all_blocks=1 00:09:06.465 --rc geninfo_unexecuted_blocks=1 00:09:06.465 00:09:06.465 ' 00:09:06.465 11:53:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:06.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.465 --rc genhtml_branch_coverage=1 00:09:06.465 --rc genhtml_function_coverage=1 00:09:06.465 --rc genhtml_legend=1 00:09:06.465 --rc geninfo_all_blocks=1 00:09:06.465 --rc geninfo_unexecuted_blocks=1 00:09:06.465 00:09:06.465 ' 00:09:06.465 11:53:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.466 --rc genhtml_branch_coverage=1 00:09:06.466 --rc genhtml_function_coverage=1 00:09:06.466 --rc genhtml_legend=1 00:09:06.466 --rc geninfo_all_blocks=1 00:09:06.466 --rc 
geninfo_unexecuted_blocks=1 00:09:06.466 00:09:06.466 ' 00:09:06.466 11:53:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.466 --rc genhtml_branch_coverage=1 00:09:06.466 --rc genhtml_function_coverage=1 00:09:06.466 --rc genhtml_legend=1 00:09:06.466 --rc geninfo_all_blocks=1 00:09:06.466 --rc geninfo_unexecuted_blocks=1 00:09:06.466 00:09:06.466 ' 00:09:06.466 11:53:11 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:09:06.466 11:53:11 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:09:06.466 11:53:11 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:09:06.466 11:53:11 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:09:06.466 11:53:11 -- unit/unittest.sh@174 -- # [[ y == y ]] 00:09:06.466 11:53:11 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:06.466 11:53:11 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:06.466 11:53:11 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:09:24.544 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:09:24.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:09:24.545 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:09:24.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:09:24.545 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:09:24.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:09:56.677 11:53:59 -- unit/unittest.sh@182 -- # uname -m 00:09:56.677 11:53:59 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']' 00:09:56.677 11:53:59 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:09:56.677 11:53:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:56.677 11:53:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.677 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:09:56.677 ************************************ 00:09:56.677 START TEST unittest_pci_event 00:09:56.677 ************************************ 00:09:56.677 11:53:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:09:56.677 00:09:56.677 00:09:56.677 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.677 http://cunit.sourceforge.net/ 00:09:56.677 00:09:56.677 00:09:56.677 Suite: pci_event 00:09:56.677 Test: test_pci_parse_event ...[2024-11-29 11:54:00.008179] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:09:56.677 [2024-11-29 11:54:00.008825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:09:56.677 passed 00:09:56.677 00:09:56.677 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.678 suites 1 1 
n/a 0 0 00:09:56.678 tests 1 1 1 0 0 00:09:56.678 asserts 15 15 15 0 n/a 00:09:56.678 00:09:56.678 Elapsed time = 0.001 seconds 00:09:56.678 00:09:56.678 real 0m0.035s 00:09:56.678 user 0m0.015s 00:09:56.678 sys 0m0.015s 00:09:56.678 11:54:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.678 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:56.678 ************************************ 00:09:56.678 END TEST unittest_pci_event 00:09:56.678 ************************************ 00:09:56.678 11:54:00 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:09:56.678 11:54:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:56.678 11:54:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.678 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:56.678 ************************************ 00:09:56.678 START TEST unittest_include 00:09:56.678 ************************************ 00:09:56.678 11:54:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:09:56.678 00:09:56.678 00:09:56.678 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.678 http://cunit.sourceforge.net/ 00:09:56.678 00:09:56.678 00:09:56.678 Suite: histogram 00:09:56.678 Test: histogram_test ...passed 00:09:56.678 Test: histogram_merge ...passed 00:09:56.678 00:09:56.678 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.678 suites 1 1 n/a 0 0 00:09:56.678 tests 2 2 2 0 0 00:09:56.678 asserts 50 50 50 0 n/a 00:09:56.678 00:09:56.678 Elapsed time = 0.008 seconds 00:09:56.678 00:09:56.678 real 0m0.038s 00:09:56.678 user 0m0.025s 00:09:56.678 sys 0m0.013s 00:09:56.678 11:54:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.678 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:56.678 ************************************ 00:09:56.678 END TEST unittest_include 00:09:56.678 ************************************ 00:09:56.678 11:54:00 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev 00:09:56.678 11:54:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:56.678 11:54:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.678 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:56.678 ************************************ 00:09:56.678 START TEST unittest_bdev 00:09:56.678 ************************************ 00:09:56.678 11:54:00 -- common/autotest_common.sh@1114 -- # unittest_bdev 00:09:56.678 11:54:00 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:09:56.678 00:09:56.678 00:09:56.678 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.678 http://cunit.sourceforge.net/ 00:09:56.678 00:09:56.678 00:09:56.678 Suite: bdev 00:09:56.678 Test: bytes_to_blocks_test ...passed 00:09:56.678 Test: num_blocks_test ...passed 00:09:56.678 Test: io_valid_test ...passed 00:09:56.678 Test: open_write_test ...[2024-11-29 11:54:00.267450] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:09:56.678 [2024-11-29 11:54:00.267763] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:09:56.678 [2024-11-29 11:54:00.267912] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write 
by module bdev_ut 00:09:56.678 passed 00:09:56.678 Test: claim_test ...passed 00:09:56.678 Test: alias_add_del_test ...[2024-11-29 11:54:00.371199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:09:56.678 [2024-11-29 11:54:00.371389] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:09:56.678 [2024-11-29 11:54:00.371432] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:09:56.678 passed 00:09:56.678 Test: get_device_stat_test ...passed 00:09:56.678 Test: bdev_io_types_test ...passed 00:09:56.678 Test: bdev_io_wait_test ...passed 00:09:56.678 Test: bdev_io_spans_split_test ...passed 00:09:56.678 Test: bdev_io_boundary_split_test ...passed 00:09:56.678 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-29 11:54:00.548979] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:09:56.678 passed 00:09:56.678 Test: bdev_io_mix_split_test ...passed 00:09:56.678 Test: bdev_io_split_with_io_wait ...passed 00:09:56.678 Test: bdev_io_write_unit_split_test ...[2024-11-29 11:54:00.682998] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:09:56.678 [2024-11-29 11:54:00.683139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:09:56.678 [2024-11-29 11:54:00.683181] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:09:56.678 [2024-11-29 11:54:00.683229] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:09:56.678 passed 00:09:56.678 Test: bdev_io_alignment_with_boundary ...passed 00:09:56.678 Test: bdev_io_alignment ...passed 00:09:56.678 Test: bdev_histograms ...passed 00:09:56.678 Test: bdev_write_zeroes ...passed 00:09:56.678 Test: bdev_compare_and_write ...passed 00:09:56.678 Test: bdev_compare ...passed 00:09:56.678 Test: bdev_compare_emulated ...passed 00:09:56.678 Test: bdev_zcopy_write ...passed 00:09:56.678 Test: bdev_zcopy_read ...passed 00:09:56.678 Test: bdev_open_while_hotremove ...passed 00:09:56.678 Test: bdev_close_while_hotremove ...passed 00:09:56.678 Test: bdev_open_ext_test ...[2024-11-29 11:54:01.188725] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:09:56.678 passed 00:09:56.678 Test: bdev_open_ext_unregister ...[2024-11-29 11:54:01.189082] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:09:56.678 passed 00:09:56.678 Test: bdev_set_io_timeout ...passed 00:09:56.678 Test: bdev_set_qd_sampling ...passed 00:09:56.678 Test: lba_range_overlap ...passed 00:09:56.678 Test: lock_lba_range_check_ranges ...passed 00:09:56.678 Test: lock_lba_range_with_io_outstanding ...passed 00:09:56.678 Test: lock_lba_range_overlapped ...passed 00:09:56.678 Test: bdev_quiesce ...[2024-11-29 11:54:01.411101] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
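[annotation] Each "Suite:" block in this part of the log is a stand-alone CUnit binary that unittest.sh launches through run_test, so when a single suite fails it is usually quicker to rerun just that binary rather than the whole harness. For the bdev suite being traced here, the path (taken verbatim from the trace above) is:

    # Rerun only the bdev unit tests, outside run_test and without the coverage wrapper.
    /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut

[end annotation]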
00:09:56.678 passed 00:09:56.678 Test: bdev_io_abort ...passed 00:09:56.678 Test: bdev_unmap ...passed 00:09:56.678 Test: bdev_write_zeroes_split_test ...passed 00:09:56.678 Test: bdev_set_options_test ...passed 00:09:56.678 Test: bdev_get_memory_domains ...passed 00:09:56.678 Test: bdev_io_ext ...[2024-11-29 11:54:01.552781] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:09:56.678 passed 00:09:56.678 Test: bdev_io_ext_no_opts ...passed 00:09:56.678 Test: bdev_io_ext_invalid_opts ...passed 00:09:56.678 Test: bdev_io_ext_split ...passed 00:09:56.678 Test: bdev_io_ext_bounce_buffer ...passed 00:09:56.679 Test: bdev_register_uuid_alias ...[2024-11-29 11:54:01.768724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name e93b1c34-7a4c-43c2-b8d5-8972d1dc7dfd already exists 00:09:56.679 [2024-11-29 11:54:01.768830] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:e93b1c34-7a4c-43c2-b8d5-8972d1dc7dfd alias for bdev bdev0 00:09:56.679 passed 00:09:56.679 Test: bdev_unregister_by_name ...[2024-11-29 11:54:01.789879] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:09:56.679 passed 00:09:56.679 Test: for_each_bdev_test ...[2024-11-29 11:54:01.789937] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:09:56.679 passed 00:09:56.679 Test: bdev_seek_test ...passed 00:09:56.679 Test: bdev_copy ...passed 00:09:56.679 Test: bdev_copy_split_test ...passed 00:09:56.679 Test: examine_locks ...passed 00:09:56.679 Test: claim_v2_rwo ...[2024-11-29 11:54:01.910964] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911048] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911075] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911132] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911150] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:09:56.679 passed 00:09:56.679 Test: claim_v2_rom ...[2024-11-29 11:54:01.911357] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911419] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911442] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911465] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911516] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:09:56.679 [2024-11-29 11:54:01.911557] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:56.679 passed 00:09:56.679 Test: claim_v2_rwm ...[2024-11-29 11:54:01.911696] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:09:56.679 [2024-11-29 11:54:01.911757] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911785] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911810] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911829] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911854] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.911888] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:09:56.679 passed 00:09:56.679 Test: claim_v2_existing_writer ...passed 00:09:56.679 Test: claim_v2_existing_v1 ...[2024-11-29 11:54:01.912029] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:56.679 [2024-11-29 11:54:01.912063] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:56.679 [2024-11-29 11:54:01.912163] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.912212] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:56.679 passed 00:09:56.679 Test: claim_v1_existing_v2 ...passed 00:09:56.679 Test: examine_claimed ...[2024-11-29 11:54:01.912232] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.912341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.912393] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.912428] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:56.679 [2024-11-29 11:54:01.912714] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:09:56.679 passed 00:09:56.679 00:09:56.679 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.679 suites 1 1 n/a 0 0 00:09:56.679 tests 59 59 59 0 0 00:09:56.679 asserts 4599 4599 4599 0 n/a 00:09:56.679 00:09:56.679 Elapsed time = 1.724 seconds 00:09:56.679 11:54:01 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:09:56.679 00:09:56.679 00:09:56.679 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.679 http://cunit.sourceforge.net/ 00:09:56.679 00:09:56.679 00:09:56.679 Suite: nvme 00:09:56.679 Test: test_create_ctrlr ...passed 00:09:56.679 Test: test_reset_ctrlr ...[2024-11-29 11:54:01.960671] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.679 passed 00:09:56.679 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:09:56.679 Test: test_failover_ctrlr ...passed 00:09:56.679 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-29 11:54:01.964488] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.679 [2024-11-29 11:54:01.964986] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.679 [2024-11-29 11:54:01.965386] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.679 passed 00:09:56.679 Test: test_pending_reset ...[2024-11-29 11:54:01.967501] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.679 [2024-11-29 11:54:01.967999] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.679 passed 00:09:56.679 Test: test_attach_ctrlr ...[2024-11-29 11:54:01.969637] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:09:56.679 passed 00:09:56.679 Test: test_aer_cb ...passed 00:09:56.679 Test: test_submit_nvme_cmd ...passed 00:09:56.679 Test: test_add_remove_trid ...passed 00:09:56.679 Test: test_abort ...[2024-11-29 11:54:01.974285] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:09:56.679 passed 00:09:56.679 Test: test_get_io_qpair ...passed 00:09:56.679 Test: test_bdev_unregister ...passed 00:09:56.679 Test: test_compare_ns ...passed 00:09:56.679 Test: test_init_ana_log_page ...passed 00:09:56.679 Test: test_get_memory_domains ...passed 00:09:56.679 Test: test_reconnect_qpair ...[2024-11-29 11:54:01.978605] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
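[annotation] The "START TEST ... / END TEST ..." banners and the real/user/sys timing lines that bracket each suite in this log come from the run_test wrapper in autotest_common.sh. A simplified re-creation of that banner-plus-timing pattern is sketched below; run_banner is a made-up name and the real wrapper additionally handles xtrace and argument checking.

    # Sketch of the banner/timing pattern; not the actual run_test implementation.
    run_banner() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # prints the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g.:
    # run_banner unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut

[end annotation]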
00:09:56.679 passed 00:09:56.679 Test: test_create_bdev_ctrlr ...[2024-11-29 11:54:01.979502] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:09:56.679 passed 00:09:56.679 Test: test_add_multi_ns_to_bdev ...[2024-11-29 11:54:01.981347] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:09:56.679 passed 00:09:56.679 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:09:56.679 Test: test_admin_path ...passed 00:09:56.679 Test: test_reset_bdev_ctrlr ...passed 00:09:56.679 Test: test_find_io_path ...passed 00:09:56.679 Test: test_retry_io_if_ana_state_is_updating ...passed 00:09:56.679 Test: test_retry_io_for_io_path_error ...passed 00:09:56.679 Test: test_retry_io_count ...passed 00:09:56.679 Test: test_concurrent_read_ana_log_page ...passed 00:09:56.679 Test: test_retry_io_for_ana_error ...passed 00:09:56.679 Test: test_check_io_error_resiliency_params ...[2024-11-29 11:54:01.991123] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:09:56.679 [2024-11-29 11:54:01.991216] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:09:56.679 [2024-11-29 11:54:01.991257] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:09:56.679 [2024-11-29 11:54:01.991292] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:09:56.679 [2024-11-29 11:54:01.991326] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:09:56.679 [2024-11-29 11:54:01.991372] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:09:56.679 [2024-11-29 11:54:01.991833] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:09:56.679 [2024-11-29 11:54:01.991910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:09:56.679 [2024-11-29 11:54:01.991942] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:09:56.679 passed 00:09:56.679 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:09:56.680 Test: test_reconnect_ctrlr ...[2024-11-29 11:54:01.993296] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.993805] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:09:56.680 [2024-11-29 11:54:01.994336] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.994510] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.994953] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 passed 00:09:56.680 Test: test_retry_failover_ctrlr ...[2024-11-29 11:54:01.995694] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 passed 00:09:56.680 Test: test_fail_path ...[2024-11-29 11:54:01.996717] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.997018] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.997277] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.997545] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:01.997936] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 passed 00:09:56.680 Test: test_nvme_ns_cmp ...passed 00:09:56.680 Test: test_ana_transition ...passed 00:09:56.680 Test: test_set_preferred_path ...passed 00:09:56.680 Test: test_find_next_io_path ...passed 00:09:56.680 Test: test_find_io_path_min_qd ...passed 00:09:56.680 Test: test_disable_auto_failback ...[2024-11-29 11:54:02.000482] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 passed 00:09:56.680 Test: test_set_multipath_policy ...passed 00:09:56.680 Test: test_uuid_generation ...passed 00:09:56.680 Test: test_retry_io_to_same_path ...passed 00:09:56.680 Test: test_race_between_reset_and_disconnected ...passed 00:09:56.680 Test: test_ctrlr_op_rpc ...passed 00:09:56.680 Test: test_bdev_ctrlr_op_rpc ...passed 00:09:56.680 Test: test_disable_enable_ctrlr ...[2024-11-29 11:54:02.006211] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:56.680 [2024-11-29 11:54:02.006548] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:09:56.680 passed 00:09:56.680 Test: test_delete_ctrlr_done ...passed 00:09:56.680 Test: test_ns_remove_during_reset ...passed 00:09:56.680 00:09:56.680 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.680 suites 1 1 n/a 0 0 00:09:56.680 tests 48 48 48 0 0 00:09:56.680 asserts 3553 3553 3553 0 n/a 00:09:56.680 00:09:56.680 Elapsed time = 0.050 seconds 00:09:56.680 11:54:02 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:09:56.680 Test Options 00:09:56.680 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:09:56.680 00:09:56.680 00:09:56.680 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.680 http://cunit.sourceforge.net/ 00:09:56.680 00:09:56.680 00:09:56.680 Suite: raid 00:09:56.680 Test: test_create_raid ...passed 00:09:56.680 Test: test_create_raid_superblock ...passed 00:09:56.680 Test: test_delete_raid ...passed 00:09:56.680 Test: test_create_raid_invalid_args ...[2024-11-29 11:54:02.053454] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:09:56.680 [2024-11-29 11:54:02.053920] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:09:56.680 [2024-11-29 11:54:02.054419] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:09:56.680 [2024-11-29 11:54:02.054669] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:56.680 [2024-11-29 11:54:02.055497] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:56.680 passed 00:09:56.680 Test: test_delete_raid_invalid_args ...passed 00:09:56.680 Test: test_io_channel ...passed 00:09:56.680 Test: test_reset_io ...passed 00:09:56.680 Test: test_write_io ...passed 00:09:56.680 Test: test_read_io ...passed 00:09:57.615 Test: test_unmap_io ...passed 00:09:57.615 Test: test_io_failure ...[2024-11-29 11:54:03.068292] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:09:57.615 passed 00:09:57.615 Test: test_multi_raid_no_io ...passed 00:09:57.615 Test: test_multi_raid_with_io ...passed 00:09:57.615 Test: test_io_type_supported ...passed 00:09:57.615 Test: test_raid_json_dump_info ...passed 00:09:57.615 Test: test_context_size ...passed 00:09:57.615 Test: test_raid_level_conversions ...passed 00:09:57.615 Test: test_raid_process ...passed 00:09:57.615 Test: test_raid_io_split ...passed 00:09:57.615 00:09:57.615 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.615 suites 1 1 n/a 0 0 00:09:57.615 tests 19 19 19 0 0 00:09:57.615 asserts 177879 177879 177879 0 n/a 00:09:57.615 00:09:57.615 Elapsed time = 1.022 seconds 00:09:57.615 11:54:03 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:09:57.615 00:09:57.615 00:09:57.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.615 http://cunit.sourceforge.net/ 00:09:57.615 00:09:57.615 00:09:57.615 Suite: raid_sb 00:09:57.615 Test: test_raid_bdev_write_superblock ...passed 00:09:57.615 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:57.615 Test: 
test_raid_bdev_parse_superblock ...[2024-11-29 11:54:03.117383] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:57.615 passed 00:09:57.615 00:09:57.616 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.616 suites 1 1 n/a 0 0 00:09:57.616 tests 3 3 3 0 0 00:09:57.616 asserts 32 32 32 0 n/a 00:09:57.616 00:09:57.616 Elapsed time = 0.002 seconds 00:09:57.875 11:54:03 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:09:57.875 00:09:57.875 00:09:57.876 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.876 http://cunit.sourceforge.net/ 00:09:57.876 00:09:57.876 00:09:57.876 Suite: concat 00:09:57.876 Test: test_concat_start ...passed 00:09:57.876 Test: test_concat_rw ...passed 00:09:57.876 Test: test_concat_null_payload ...passed 00:09:57.876 00:09:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.876 suites 1 1 n/a 0 0 00:09:57.876 tests 3 3 3 0 0 00:09:57.876 asserts 8097 8097 8097 0 n/a 00:09:57.876 00:09:57.876 Elapsed time = 0.006 seconds 00:09:57.876 11:54:03 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:09:57.876 00:09:57.876 00:09:57.876 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.876 http://cunit.sourceforge.net/ 00:09:57.876 00:09:57.876 00:09:57.876 Suite: raid1 00:09:57.876 Test: test_raid1_start ...passed 00:09:57.876 Test: test_raid1_read_balancing ...passed 00:09:57.876 00:09:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.876 suites 1 1 n/a 0 0 00:09:57.876 tests 2 2 2 0 0 00:09:57.876 asserts 2856 2856 2856 0 n/a 00:09:57.876 00:09:57.876 Elapsed time = 0.004 seconds 00:09:57.876 11:54:03 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:09:57.876 00:09:57.876 00:09:57.876 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.876 http://cunit.sourceforge.net/ 00:09:57.876 00:09:57.876 00:09:57.876 Suite: zone 00:09:57.876 Test: test_zone_get_operation ...passed 00:09:57.876 Test: test_bdev_zone_get_info ...passed 00:09:57.876 Test: test_bdev_zone_management ...passed 00:09:57.876 Test: test_bdev_zone_append ...passed 00:09:57.876 Test: test_bdev_zone_append_with_md ...passed 00:09:57.876 Test: test_bdev_zone_appendv ...passed 00:09:57.876 Test: test_bdev_zone_appendv_with_md ...passed 00:09:57.876 Test: test_bdev_io_get_append_location ...passed 00:09:57.876 00:09:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.876 suites 1 1 n/a 0 0 00:09:57.876 tests 8 8 8 0 0 00:09:57.876 asserts 94 94 94 0 n/a 00:09:57.876 00:09:57.876 Elapsed time = 0.000 seconds 00:09:57.876 11:54:03 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:09:57.876 00:09:57.876 00:09:57.876 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.876 http://cunit.sourceforge.net/ 00:09:57.876 00:09:57.876 00:09:57.876 Suite: gpt_parse 00:09:57.876 Test: test_parse_mbr_and_primary ...[2024-11-29 11:54:03.249421] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:57.876 [2024-11-29 11:54:03.250154] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:57.876 [2024-11-29 11:54:03.250291] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:09:57.876 [2024-11-29 11:54:03.250958] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:09:57.876 [2024-11-29 11:54:03.251087] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:09:57.876 [2024-11-29 11:54:03.251572] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:09:57.876 passed 00:09:57.876 Test: test_parse_secondary ...[2024-11-29 11:54:03.252541] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:09:57.876 [2024-11-29 11:54:03.252623] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:09:57.876 [2024-11-29 11:54:03.252674] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:09:57.876 [2024-11-29 11:54:03.252994] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:09:57.876 passed 00:09:57.876 Test: test_check_mbr ...[2024-11-29 11:54:03.253926] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:57.876 passed 00:09:57.876 Test: test_read_header ...[2024-11-29 11:54:03.254013] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:57.876 [2024-11-29 11:54:03.254094] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:09:57.876 [2024-11-29 11:54:03.254713] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:09:57.876 [2024-11-29 11:54:03.254825] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:09:57.876 [2024-11-29 11:54:03.254877] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:09:57.876 [2024-11-29 11:54:03.254924] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:09:57.876 [2024-11-29 11:54:03.255271] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:09:57.876 passed 00:09:57.876 Test: test_read_partitions ...[2024-11-29 11:54:03.255357] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:09:57.876 [2024-11-29 11:54:03.255698] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:09:57.876 [2024-11-29 11:54:03.255761] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:09:57.876 [2024-11-29 11:54:03.255801] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:09:57.876 [2024-11-29 11:54:03.256452] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:09:57.876 passed 00:09:57.876 00:09:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.876 suites 1 1 n/a 0 0 00:09:57.876 tests 5 5 5 0 0 00:09:57.876 asserts 33 33 33 0 n/a 00:09:57.876 00:09:57.876 Elapsed time = 0.008 seconds 00:09:57.876 11:54:03 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:09:57.876 00:09:57.876 00:09:57.876 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.876 http://cunit.sourceforge.net/ 00:09:57.876 00:09:57.876 00:09:57.876 Suite: bdev_part 00:09:57.876 Test: part_test ...[2024-11-29 11:54:03.287838] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:09:57.876 passed 00:09:57.876 Test: part_free_test ...passed 00:09:57.876 Test: part_get_io_channel_test ...passed 00:09:57.876 Test: part_construct_ext ...passed 00:09:57.876 00:09:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.876 suites 1 1 n/a 0 0 00:09:57.876 tests 4 4 4 0 0 00:09:57.876 asserts 48 48 48 0 n/a 00:09:57.876 00:09:57.876 Elapsed time = 0.048 seconds 00:09:57.876 11:54:03 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:09:57.876 00:09:57.876 00:09:57.876 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.876 http://cunit.sourceforge.net/ 00:09:57.876 00:09:57.876 00:09:57.876 Suite: scsi_nvme_suite 00:09:57.876 Test: scsi_nvme_translate_test ...passed 00:09:57.876 00:09:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.876 suites 1 1 n/a 0 0 00:09:57.876 tests 1 1 1 0 0 00:09:57.876 asserts 104 104 104 0 n/a 00:09:57.876 00:09:57.876 Elapsed time = 0.000 seconds 00:09:58.136 11:54:03 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:09:58.136 00:09:58.136 00:09:58.136 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.136 http://cunit.sourceforge.net/ 00:09:58.136 00:09:58.136 00:09:58.136 Suite: lvol 00:09:58.136 Test: ut_lvs_init ...[2024-11-29 11:54:03.398987] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:09:58.136 [2024-11-29 11:54:03.399414] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:09:58.136 passed 00:09:58.136 Test: ut_lvol_init ...passed 00:09:58.136 Test: ut_lvol_snapshot ...passed 00:09:58.136 Test: ut_lvol_clone ...passed 00:09:58.136 Test: ut_lvs_destroy ...passed 00:09:58.136 Test: ut_lvs_unload ...passed 00:09:58.136 Test: ut_lvol_resize ...[2024-11-29 11:54:03.400776] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:09:58.136 passed 00:09:58.136 Test: ut_lvol_set_read_only ...passed 00:09:58.136 Test: ut_lvol_hotremove ...passed 00:09:58.136 Test: ut_vbdev_lvol_get_io_channel ...passed 00:09:58.136 Test: ut_vbdev_lvol_io_type_supported ...passed 00:09:58.136 Test: ut_lvol_read_write ...passed 00:09:58.136 Test: ut_vbdev_lvol_submit_request ...passed 00:09:58.136 Test: ut_lvol_examine_config ...passed 00:09:58.136 Test: ut_lvol_examine_disk ...[2024-11-29 11:54:03.401498] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:09:58.136 passed 00:09:58.136 Test: ut_lvol_rename ...[2024-11-29 11:54:03.402436] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:09:58.136 [2024-11-29 11:54:03.402558] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:09:58.136 passed 00:09:58.136 Test: ut_bdev_finish ...passed 00:09:58.136 Test: ut_lvs_rename ...passed 00:09:58.136 Test: ut_lvol_seek ...passed 00:09:58.136 Test: ut_esnap_dev_create ...[2024-11-29 11:54:03.403228] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:09:58.136 [2024-11-29 11:54:03.403300] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:09:58.136 [2024-11-29 11:54:03.403335] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:09:58.136 [2024-11-29 11:54:03.403389] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:09:58.136 passed 00:09:58.136 Test: ut_lvol_esnap_clone_bad_args ...passed 00:09:58.136 00:09:58.136 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.136 suites 1 1 n/a 0 0 00:09:58.136 tests 21 21 21 0 0 00:09:58.136 asserts 712 712 712 0 n/a 00:09:58.136 00:09:58.136 Elapsed time = 0.005 seconds 00:09:58.136 [2024-11-29 11:54:03.403540] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:09:58.136 [2024-11-29 11:54:03.403581] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:09:58.136 11:54:03 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:09:58.136 00:09:58.136 00:09:58.136 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.136 http://cunit.sourceforge.net/ 00:09:58.136 00:09:58.136 00:09:58.136 Suite: zone_block 00:09:58.136 Test: test_zone_block_create ...passed 00:09:58.136 Test: test_zone_block_create_invalid ...[2024-11-29 11:54:03.465183] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:09:58.136 [2024-11-29 11:54:03.465904] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-29 11:54:03.466546] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:09:58.136 [2024-11-29 11:54:03.466699] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-29 11:54:03.467235] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:09:58.136 [2024-11-29 11:54:03.467304] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-11-29 11:54:03.467702] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:09:58.136 [2024-11-29 11:54:03.467794] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:09:58.136 Test: test_get_zone_info ...[2024-11-29 11:54:03.468913] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.469020] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.469558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 passed 00:09:58.136 Test: test_supported_io_types ...passed 00:09:58.136 Test: test_reset_zone ...[2024-11-29 11:54:03.471144] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.471238] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 passed 00:09:58.136 Test: test_open_zone ...[2024-11-29 11:54:03.472388] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.473340] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.473443] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 passed 00:09:58.136 Test: test_zone_write ...[2024-11-29 11:54:03.474622] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:09:58.136 [2024-11-29 11:54:03.474708] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.474827] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:09:58.136 [2024-11-29 11:54:03.475185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.136 [2024-11-29 11:54:03.481262] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:09:58.137 [2024-11-29 11:54:03.481340] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:09:58.137 [2024-11-29 11:54:03.481855] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:09:58.137 [2024-11-29 11:54:03.481938] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.487721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:09:58.137 [2024-11-29 11:54:03.487823] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 passed 00:09:58.137 Test: test_zone_read ...[2024-11-29 11:54:03.488843] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:09:58.137 [2024-11-29 11:54:03.488908] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.489391] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:09:58.137 [2024-11-29 11:54:03.489455] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.490147] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:09:58.137 [2024-11-29 11:54:03.490208] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 passed 00:09:58.137 Test: test_close_zone ...[2024-11-29 11:54:03.491080] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.491231] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.491866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.491950] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 passed 00:09:58.137 Test: test_finish_zone ...[2024-11-29 11:54:03.493174] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.493263] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
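The zone_block error strings above (invalid zone, invalid zone state, lba vs. write pointer mismatch, write past zone capacity) reflect the usual ordering rules for writes to a zoned block device. A rough, illustrative-only sketch of those checks follows; the struct and function here are hypothetical and are not the vbdev_zone_block module's code:

```c
/* Illustrative-only sketch of the checks behind the zone_block messages above. */
#include <stdbool.h>
#include <stdint.h>

struct sketch_zone {
	uint64_t start_lba;      /* first LBA of the zone */
	uint64_t write_pointer;  /* next LBA that may be written */
	uint64_t capacity;       /* writable blocks in this zone */
	bool     open_for_write; /* e.g. EMPTY/OPEN, as opposed to FULL or offline */
};

static bool
sketch_zone_write_allowed(const struct sketch_zone *zone, uint64_t lba,
			  uint64_t num_blocks, uint64_t device_num_blocks)
{
	if (lba >= device_num_blocks) {
		/* "Trying to write to invalid zone (lba 0x5000)" */
		return false;
	}
	if (!zone->open_for_write) {
		/* "Trying to write to zone in invalid state" */
		return false;
	}
	if (lba != zone->write_pointer) {
		/* "Trying to write to zone with invalid address (lba X, wp Y)":
		 * sequential zones only accept writes at the write pointer. */
		return false;
	}
	if (lba + num_blocks > zone->start_lba + zone->capacity) {
		/* "Write exceeds zone capacity" */
		return false;
	}
	return true;
}
```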
00:09:58.137 passed 00:09:58.137 Test: test_append_zone ...[2024-11-29 11:54:03.494264] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:09:58.137 [2024-11-29 11:54:03.494364] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.494825] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:09:58.137 [2024-11-29 11:54:03.494892] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 [2024-11-29 11:54:03.507332] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:09:58.137 [2024-11-29 11:54:03.507428] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:58.137 passed 00:09:58.137 00:09:58.137 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.137 suites 1 1 n/a 0 0 00:09:58.137 tests 11 11 11 0 0 00:09:58.137 asserts 3437 3437 3437 0 n/a 00:09:58.137 00:09:58.137 Elapsed time = 0.045 seconds 00:09:58.137 11:54:03 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:09:58.137 00:09:58.137 00:09:58.137 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.137 http://cunit.sourceforge.net/ 00:09:58.137 00:09:58.137 00:09:58.137 Suite: bdev 00:09:58.137 Test: basic ...[2024-11-29 11:54:03.604185] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5611103f7401): Operation not permitted (rc=-1) 00:09:58.137 [2024-11-29 11:54:03.604510] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x5611103f73c0): Operation not permitted (rc=-1) 00:09:58.137 [2024-11-29 11:54:03.604569] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5611103f7401): Operation not permitted (rc=-1) 00:09:58.137 passed 00:09:58.396 Test: unregister_and_close ...passed 00:09:58.396 Test: unregister_and_close_different_threads ...passed 00:09:58.396 Test: basic_qos ...passed 00:09:58.396 Test: put_channel_during_reset ...passed 00:09:58.396 Test: aborted_reset ...passed 00:09:58.702 Test: aborted_reset_no_outstanding_io ...passed 00:09:58.702 Test: io_during_reset ...passed 00:09:58.702 Test: reset_completions ...passed 00:09:58.702 Test: io_during_qos_queue ...passed 00:09:58.702 Test: io_during_qos_reset ...passed 00:09:58.702 Test: enomem ...passed 00:09:58.961 Test: enomem_multi_bdev ...passed 00:09:58.961 Test: enomem_multi_bdev_unregister ...passed 00:09:58.961 Test: enomem_multi_io_target ...passed 00:09:58.961 Test: qos_dynamic_enable ...passed 00:09:58.961 Test: bdev_histograms_mt ...passed 00:09:58.961 Test: bdev_set_io_timeout_mt ...passed 00:09:58.961 Test: lock_lba_range_then_submit_io ...[2024-11-29 11:54:04.445851] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:09:58.961 [2024-11-29 11:54:04.467591] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x5611103f7380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:09:59.219 
passed 00:09:59.219 Test: unregister_during_reset ...passed 00:09:59.219 Test: event_notify_and_close ...passed 00:09:59.219 Test: unregister_and_qos_poller ...passed 00:09:59.219 Suite: bdev_wrong_thread 00:09:59.219 Test: spdk_bdev_register_wt ...[2024-11-29 11:54:04.636527] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:09:59.219 passed 00:09:59.219 Test: spdk_bdev_examine_wt ...[2024-11-29 11:54:04.637139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:09:59.219 passed 00:09:59.219 00:09:59.219 Run Summary: Type Total Ran Passed Failed Inactive 00:09:59.219 suites 2 2 n/a 0 0 00:09:59.219 tests 24 24 24 0 0 00:09:59.219 asserts 621 621 621 0 n/a 00:09:59.219 00:09:59.219 Elapsed time = 1.057 seconds 00:09:59.219 ************************************ 00:09:59.219 END TEST unittest_bdev 00:09:59.219 ************************************ 00:09:59.219 00:09:59.219 real 0m4.492s 00:09:59.219 user 0m1.971s 00:09:59.219 sys 0m2.517s 00:09:59.219 11:54:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:59.219 11:54:04 -- common/autotest_common.sh@10 -- # set +x 00:09:59.219 11:54:04 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:59.219 11:54:04 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:59.219 11:54:04 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:59.219 11:54:04 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:59.219 11:54:04 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:09:59.219 11:54:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:59.219 11:54:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.219 11:54:04 -- common/autotest_common.sh@10 -- # set +x 00:09:59.219 ************************************ 00:09:59.219 START TEST unittest_bdev_raid5f 00:09:59.219 ************************************ 00:09:59.219 11:54:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:09:59.478 00:09:59.478 00:09:59.478 CUnit - A unit testing framework for C - Version 2.1-3 00:09:59.478 http://cunit.sourceforge.net/ 00:09:59.478 00:09:59.478 00:09:59.478 Suite: raid5f 00:09:59.478 Test: test_raid5f_start ...passed 00:09:59.736 Test: test_raid5f_submit_read_request ...passed 00:09:59.995 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:10:04.182 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:10:22.268 Test: test_raid5f_chunk_write_error ...passed 00:10:30.378 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:10:32.287 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:11:04.359 Test: test_raid5f_submit_read_request_degraded ...passed 00:11:04.359 00:11:04.359 Run Summary: Type Total Ran Passed Failed Inactive 00:11:04.359 suites 1 1 n/a 0 0 00:11:04.359 tests 8 8 8 0 0 00:11:04.359 asserts 351864 351864 351864 0 n/a 00:11:04.359 00:11:04.359 Elapsed time = 61.508 seconds 00:11:04.359 00:11:04.359 real 1m1.614s 00:11:04.359 user 
0m58.699s 00:11:04.359 sys 0m2.897s 00:11:04.359 11:55:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.359 ************************************ 00:11:04.359 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:11:04.359 END TEST unittest_bdev_raid5f 00:11:04.359 ************************************ 00:11:04.359 11:55:06 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:11:04.359 11:55:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:04.359 11:55:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.359 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:11:04.359 ************************************ 00:11:04.359 START TEST unittest_blob_blobfs 00:11:04.359 ************************************ 00:11:04.359 11:55:06 -- common/autotest_common.sh@1114 -- # unittest_blob 00:11:04.359 11:55:06 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:11:04.359 11:55:06 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:11:04.359 00:11:04.359 00:11:04.359 CUnit - A unit testing framework for C - Version 2.1-3 00:11:04.359 http://cunit.sourceforge.net/ 00:11:04.359 00:11:04.359 00:11:04.359 Suite: blob_nocopy_noextent 00:11:04.359 Test: blob_init ...[2024-11-29 11:55:06.421233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:11:04.359 passed 00:11:04.359 Test: blob_thin_provision ...passed 00:11:04.359 Test: blob_read_only ...passed 00:11:04.359 Test: bs_load ...[2024-11-29 11:55:06.528707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:11:04.359 passed 00:11:04.359 Test: bs_load_custom_cluster_size ...passed 00:11:04.359 Test: bs_load_after_failed_grow ...passed 00:11:04.359 Test: bs_cluster_sz ...[2024-11-29 11:55:06.565989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:11:04.359 [2024-11-29 11:55:06.566627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:11:04.359 [2024-11-29 11:55:06.566938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:11:04.359 passed 00:11:04.359 Test: bs_resize_md ...passed 00:11:04.359 Test: bs_destroy ...passed 00:11:04.359 Test: bs_type ...passed 00:11:04.359 Test: bs_super_block ...passed 00:11:04.359 Test: bs_test_recover_cluster_count ...passed 00:11:04.359 Test: bs_grow_live ...passed 00:11:04.359 Test: bs_grow_live_no_space ...passed 00:11:04.359 Test: bs_test_grow ...passed 00:11:04.359 Test: blob_serialize_test ...passed 00:11:04.359 Test: super_block_crc ...passed 00:11:04.359 Test: blob_thin_prov_write_count_io ...passed 00:11:04.359 Test: bs_load_iter_test ...passed 00:11:04.359 Test: blob_relations ...[2024-11-29 11:55:06.776347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:04.359 [2024-11-29 11:55:06.776860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:06.778244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:04.359 [2024-11-29 11:55:06.778513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 passed 00:11:04.359 Test: blob_relations2 ...[2024-11-29 11:55:06.799255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:04.359 [2024-11-29 11:55:06.799643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:06.799857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:04.359 [2024-11-29 11:55:06.800036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:06.801989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:04.359 [2024-11-29 11:55:06.802215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:06.802941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:04.359 [2024-11-29 11:55:06.803153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 passed 00:11:04.359 Test: blob_relations3 ...passed 00:11:04.359 Test: blobstore_clean_power_failure ...passed 00:11:04.359 Test: blob_delete_snapshot_power_failure ...[2024-11-29 11:55:07.011302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:11:04.359 [2024-11-29 11:55:07.028497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:04.359 [2024-11-29 11:55:07.029078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:04.359 [2024-11-29 11:55:07.029342] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:07.046327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:11:04.359 [2024-11-29 11:55:07.046895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:11:04.359 [2024-11-29 11:55:07.047009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:04.359 [2024-11-29 11:55:07.047250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:07.063732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:11:04.359 [2024-11-29 11:55:07.064233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:07.080303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:11:04.359 [2024-11-29 11:55:07.080794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 [2024-11-29 11:55:07.096988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:11:04.359 [2024-11-29 11:55:07.097407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.359 passed 00:11:04.359 Test: blob_create_snapshot_power_failure ...[2024-11-29 11:55:07.143955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:04.359 [2024-11-29 11:55:07.174670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:11:04.359 [2024-11-29 11:55:07.190683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:11:04.359 passed 00:11:04.359 Test: blob_io_unit ...passed 00:11:04.359 Test: blob_io_unit_compatibility ...passed 00:11:04.360 Test: blob_ext_md_pages ...passed 00:11:04.360 Test: blob_esnap_io_4096_4096 ...passed 00:11:04.360 Test: blob_esnap_io_512_512 ...passed 00:11:04.360 Test: blob_esnap_io_4096_512 ...passed 00:11:04.360 Test: blob_esnap_io_512_4096 ...passed 00:11:04.360 Suite: blob_bs_nocopy_noextent 00:11:04.360 Test: blob_open ...passed 00:11:04.360 Test: blob_create ...[2024-11-29 11:55:07.513874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:11:04.360 passed 00:11:04.360 Test: blob_create_loop ...passed 00:11:04.360 Test: blob_create_fail ...[2024-11-29 11:55:07.635528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:04.360 passed 00:11:04.360 Test: blob_create_internal ...passed 00:11:04.360 Test: blob_create_zero_extent ...passed 00:11:04.360 Test: blob_snapshot ...passed 00:11:04.360 Test: blob_clone ...passed 00:11:04.360 Test: blob_inflate ...[2024-11-29 11:55:07.886508] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:11:04.360 passed 00:11:04.360 Test: blob_delete ...passed 00:11:04.360 Test: blob_resize_test ...[2024-11-29 11:55:07.972741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:11:04.360 passed 00:11:04.360 Test: channel_ops ...passed 00:11:04.360 Test: blob_super ...passed 00:11:04.360 Test: blob_rw_verify_iov ...passed 00:11:04.360 Test: blob_unmap ...passed 00:11:04.360 Test: blob_iter ...passed 00:11:04.360 Test: blob_parse_md ...passed 00:11:04.360 Test: bs_load_pending_removal ...passed 00:11:04.360 Test: bs_unload ...[2024-11-29 11:55:08.319028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:11:04.360 passed 00:11:04.360 Test: bs_usable_clusters ...passed 00:11:04.360 Test: blob_crc ...[2024-11-29 11:55:08.407329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:04.360 [2024-11-29 11:55:08.407830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:04.360 passed 00:11:04.360 Test: blob_flags ...passed 00:11:04.360 Test: bs_version ...passed 00:11:04.360 Test: blob_set_xattrs_test ...[2024-11-29 11:55:08.543428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:04.360 [2024-11-29 11:55:08.543790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:04.360 passed 00:11:04.360 Test: blob_thin_prov_alloc ...passed 00:11:04.360 Test: blob_insert_cluster_msg_test ...passed 00:11:04.360 Test: blob_thin_prov_rw ...passed 00:11:04.360 Test: blob_thin_prov_rle ...passed 00:11:04.360 Test: blob_thin_prov_rw_iov ...passed 00:11:04.360 Test: blob_snapshot_rw ...passed 00:11:04.360 Test: blob_snapshot_rw_iov ...passed 00:11:04.360 Test: blob_inflate_rw ...passed 00:11:04.360 Test: blob_snapshot_freeze_io ...passed 00:11:04.360 Test: blob_operation_split_rw ...passed 00:11:04.360 Test: blob_operation_split_rw_iov ...passed 00:11:04.360 Test: blob_simultaneous_operations ...[2024-11-29 11:55:09.688897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:04.360 [2024-11-29 11:55:09.689329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.360 [2024-11-29 11:55:09.690680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:04.360 [2024-11-29 11:55:09.690858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.360 [2024-11-29 11:55:09.702888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:04.360 [2024-11-29 11:55:09.703271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.360 [2024-11-29 11:55:09.703481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:11:04.360 [2024-11-29 11:55:09.703626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:04.360 passed 00:11:04.360 Test: blob_persist_test ...passed 00:11:04.360 Test: blob_decouple_snapshot ...passed 00:11:04.618 Test: blob_seek_io_unit ...passed 00:11:04.618 Test: blob_nested_freezes ...passed 00:11:04.618 Suite: blob_blob_nocopy_noextent 00:11:04.618 Test: blob_write ...passed 00:11:04.618 Test: blob_read ...passed 00:11:04.618 Test: blob_rw_verify ...passed 00:11:04.618 Test: blob_rw_verify_iov_nomem ...passed 00:11:04.875 Test: blob_rw_iov_read_only ...passed 00:11:04.875 Test: blob_xattr ...passed 00:11:04.875 Test: blob_dirty_shutdown ...passed 00:11:04.875 Test: blob_is_degraded ...passed 00:11:04.875 Suite: blob_esnap_bs_nocopy_noextent 00:11:04.875 Test: blob_esnap_create ...passed 00:11:05.133 Test: blob_esnap_thread_add_remove ...passed 00:11:05.133 Test: blob_esnap_clone_snapshot ...passed 00:11:05.133 Test: blob_esnap_clone_inflate ...passed 00:11:05.133 Test: blob_esnap_clone_decouple ...passed 00:11:05.133 Test: blob_esnap_clone_reload ...passed 00:11:05.133 Test: blob_esnap_hotplug ...passed 00:11:05.133 Suite: blob_nocopy_extent 00:11:05.133 Test: blob_init ...[2024-11-29 11:55:10.609054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:11:05.133 passed 00:11:05.133 Test: blob_thin_provision ...passed 00:11:05.405 Test: blob_read_only ...passed 00:11:05.405 Test: bs_load ...[2024-11-29 11:55:10.669218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:11:05.405 passed 00:11:05.405 Test: bs_load_custom_cluster_size ...passed 00:11:05.405 Test: bs_load_after_failed_grow ...passed 00:11:05.405 Test: bs_cluster_sz ...[2024-11-29 11:55:10.702574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:11:05.405 [2024-11-29 11:55:10.702989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:11:05.405 [2024-11-29 11:55:10.703177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:11:05.405 passed 00:11:05.405 Test: bs_resize_md ...passed 00:11:05.405 Test: bs_destroy ...passed 00:11:05.405 Test: bs_type ...passed 00:11:05.405 Test: bs_super_block ...passed 00:11:05.405 Test: bs_test_recover_cluster_count ...passed 00:11:05.405 Test: bs_grow_live ...passed 00:11:05.405 Test: bs_grow_live_no_space ...passed 00:11:05.405 Test: bs_test_grow ...passed 00:11:05.405 Test: blob_serialize_test ...passed 00:11:05.405 Test: super_block_crc ...passed 00:11:05.405 Test: blob_thin_prov_write_count_io ...passed 00:11:05.405 Test: bs_load_iter_test ...passed 00:11:05.405 Test: blob_relations ...[2024-11-29 11:55:10.896818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:05.405 [2024-11-29 11:55:10.897245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.405 [2024-11-29 11:55:10.898303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:05.405 [2024-11-29 11:55:10.898536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.405 passed 00:11:05.688 Test: blob_relations2 ...[2024-11-29 11:55:10.916524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:05.688 [2024-11-29 11:55:10.916881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.688 [2024-11-29 11:55:10.916961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:05.688 [2024-11-29 11:55:10.917220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.688 [2024-11-29 11:55:10.918905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:05.688 [2024-11-29 11:55:10.919099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.688 [2024-11-29 11:55:10.919679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:05.688 [2024-11-29 11:55:10.919849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.688 passed 00:11:05.688 Test: blob_relations3 ...passed 00:11:05.688 Test: blobstore_clean_power_failure ...passed 00:11:05.688 Test: blob_delete_snapshot_power_failure ...[2024-11-29 11:55:11.130248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:11:05.688 [2024-11-29 11:55:11.145832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:11:05.688 [2024-11-29 11:55:11.162423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:05.688 [2024-11-29 11:55:11.162830] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:05.688 [2024-11-29 11:55:11.162920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.688 [2024-11-29 11:55:11.179159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:11:05.688 [2024-11-29 11:55:11.179542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:11:05.688 [2024-11-29 11:55:11.179629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:05.688 [2024-11-29 11:55:11.179807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.688 [2024-11-29 11:55:11.195277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:11:05.688 [2024-11-29 11:55:11.195705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:11:05.688 [2024-11-29 11:55:11.195785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:05.688 [2024-11-29 11:55:11.196016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.946 [2024-11-29 11:55:11.212071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:11:05.946 [2024-11-29 11:55:11.212560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.946 [2024-11-29 11:55:11.228571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:11:05.946 [2024-11-29 11:55:11.229011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.946 [2024-11-29 11:55:11.245655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:11:05.946 [2024-11-29 11:55:11.246146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:05.946 passed 00:11:05.946 Test: blob_create_snapshot_power_failure ...[2024-11-29 11:55:11.296725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:05.946 [2024-11-29 11:55:11.312650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:11:05.946 [2024-11-29 11:55:11.343240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:11:05.946 [2024-11-29 11:55:11.359175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:11:05.946 passed 00:11:05.946 Test: blob_io_unit ...passed 00:11:05.946 Test: blob_io_unit_compatibility ...passed 00:11:06.205 Test: blob_ext_md_pages ...passed 00:11:06.205 Test: blob_esnap_io_4096_4096 ...passed 00:11:06.205 Test: blob_esnap_io_512_512 ...passed 00:11:06.205 Test: blob_esnap_io_4096_512 ...passed 00:11:06.205 Test: 
blob_esnap_io_512_4096 ...passed 00:11:06.205 Suite: blob_bs_nocopy_extent 00:11:06.205 Test: blob_open ...passed 00:11:06.205 Test: blob_create ...[2024-11-29 11:55:11.669446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:11:06.205 passed 00:11:06.463 Test: blob_create_loop ...passed 00:11:06.463 Test: blob_create_fail ...[2024-11-29 11:55:11.795758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:06.463 passed 00:11:06.463 Test: blob_create_internal ...passed 00:11:06.463 Test: blob_create_zero_extent ...passed 00:11:06.463 Test: blob_snapshot ...passed 00:11:06.720 Test: blob_clone ...passed 00:11:06.720 Test: blob_inflate ...[2024-11-29 11:55:12.032231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:11:06.721 passed 00:11:06.721 Test: blob_delete ...passed 00:11:06.721 Test: blob_resize_test ...[2024-11-29 11:55:12.122582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:11:06.721 passed 00:11:06.721 Test: channel_ops ...passed 00:11:06.721 Test: blob_super ...passed 00:11:06.981 Test: blob_rw_verify_iov ...passed 00:11:06.981 Test: blob_unmap ...passed 00:11:06.981 Test: blob_iter ...passed 00:11:06.981 Test: blob_parse_md ...passed 00:11:06.981 Test: bs_load_pending_removal ...passed 00:11:06.981 Test: bs_unload ...[2024-11-29 11:55:12.473354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:11:06.981 passed 00:11:07.240 Test: bs_usable_clusters ...passed 00:11:07.240 Test: blob_crc ...[2024-11-29 11:55:12.560498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:07.240 [2024-11-29 11:55:12.560902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:07.240 passed 00:11:07.240 Test: blob_flags ...passed 00:11:07.240 Test: bs_version ...passed 00:11:07.240 Test: blob_set_xattrs_test ...[2024-11-29 11:55:12.694073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:07.240 [2024-11-29 11:55:12.694524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:07.240 passed 00:11:07.498 Test: blob_thin_prov_alloc ...passed 00:11:07.498 Test: blob_insert_cluster_msg_test ...passed 00:11:07.498 Test: blob_thin_prov_rw ...passed 00:11:07.498 Test: blob_thin_prov_rle ...passed 00:11:07.757 Test: blob_thin_prov_rw_iov ...passed 00:11:07.757 Test: blob_snapshot_rw ...passed 00:11:07.757 Test: blob_snapshot_rw_iov ...passed 00:11:08.016 Test: blob_inflate_rw ...passed 00:11:08.016 Test: blob_snapshot_freeze_io ...passed 00:11:08.273 Test: blob_operation_split_rw ...passed 00:11:08.531 Test: blob_operation_split_rw_iov ...passed 00:11:08.531 Test: blob_simultaneous_operations ...[2024-11-29 11:55:13.828297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:08.531 [2024-11-29 
11:55:13.828730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:08.531 [2024-11-29 11:55:13.830124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:08.531 [2024-11-29 11:55:13.830314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:08.531 [2024-11-29 11:55:13.844677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:08.531 [2024-11-29 11:55:13.845147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:08.531 [2024-11-29 11:55:13.845498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:08.531 [2024-11-29 11:55:13.845722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:08.531 passed 00:11:08.531 Test: blob_persist_test ...passed 00:11:08.531 Test: blob_decouple_snapshot ...passed 00:11:08.788 Test: blob_seek_io_unit ...passed 00:11:08.788 Test: blob_nested_freezes ...passed 00:11:08.788 Suite: blob_blob_nocopy_extent 00:11:08.788 Test: blob_write ...passed 00:11:08.788 Test: blob_read ...passed 00:11:08.788 Test: blob_rw_verify ...passed 00:11:08.788 Test: blob_rw_verify_iov_nomem ...passed 00:11:09.046 Test: blob_rw_iov_read_only ...passed 00:11:09.047 Test: blob_xattr ...passed 00:11:09.047 Test: blob_dirty_shutdown ...passed 00:11:09.047 Test: blob_is_degraded ...passed 00:11:09.047 Suite: blob_esnap_bs_nocopy_extent 00:11:09.047 Test: blob_esnap_create ...passed 00:11:09.047 Test: blob_esnap_thread_add_remove ...passed 00:11:09.304 Test: blob_esnap_clone_snapshot ...passed 00:11:09.304 Test: blob_esnap_clone_inflate ...passed 00:11:09.304 Test: blob_esnap_clone_decouple ...passed 00:11:09.304 Test: blob_esnap_clone_reload ...passed 00:11:09.304 Test: blob_esnap_hotplug ...passed 00:11:09.304 Suite: blob_copy_noextent 00:11:09.304 Test: blob_init ...[2024-11-29 11:55:14.762707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:11:09.304 passed 00:11:09.304 Test: blob_thin_provision ...passed 00:11:09.304 Test: blob_read_only ...passed 00:11:09.563 Test: bs_load ...[2024-11-29 11:55:14.822069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:11:09.563 passed 00:11:09.563 Test: bs_load_custom_cluster_size ...passed 00:11:09.563 Test: bs_load_after_failed_grow ...passed 00:11:09.563 Test: bs_cluster_sz ...[2024-11-29 11:55:14.853428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:11:09.563 [2024-11-29 11:55:14.853779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:11:09.563 [2024-11-29 11:55:14.853951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:11:09.563 passed 00:11:09.563 Test: bs_resize_md ...passed 00:11:09.563 Test: bs_destroy ...passed 00:11:09.563 Test: bs_type ...passed 00:11:09.563 Test: bs_super_block ...passed 00:11:09.563 Test: bs_test_recover_cluster_count ...passed 00:11:09.563 Test: bs_grow_live ...passed 00:11:09.563 Test: bs_grow_live_no_space ...passed 00:11:09.563 Test: bs_test_grow ...passed 00:11:09.563 Test: blob_serialize_test ...passed 00:11:09.563 Test: super_block_crc ...passed 00:11:09.563 Test: blob_thin_prov_write_count_io ...passed 00:11:09.563 Test: bs_load_iter_test ...passed 00:11:09.563 Test: blob_relations ...[2024-11-29 11:55:15.042680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:09.563 [2024-11-29 11:55:15.043119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.563 [2024-11-29 11:55:15.043825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:09.563 [2024-11-29 11:55:15.043981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.563 passed 00:11:09.563 Test: blob_relations2 ...[2024-11-29 11:55:15.061279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:09.563 [2024-11-29 11:55:15.061711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.563 [2024-11-29 11:55:15.061787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:09.563 [2024-11-29 11:55:15.061912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.563 [2024-11-29 11:55:15.062957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:09.563 [2024-11-29 11:55:15.063133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.563 [2024-11-29 11:55:15.063563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:09.563 [2024-11-29 11:55:15.063720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.563 passed 00:11:09.819 Test: blob_relations3 ...passed 00:11:09.819 Test: blobstore_clean_power_failure ...passed 00:11:09.819 Test: blob_delete_snapshot_power_failure ...[2024-11-29 11:55:15.263473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:11:09.819 [2024-11-29 11:55:15.278611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:09.819 [2024-11-29 11:55:15.278843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:09.819 [2024-11-29 11:55:15.278999] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.819 [2024-11-29 11:55:15.294103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:11:09.819 [2024-11-29 11:55:15.294513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:11:09.819 [2024-11-29 11:55:15.294595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:09.819 [2024-11-29 11:55:15.294776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.819 [2024-11-29 11:55:15.309744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:11:09.819 [2024-11-29 11:55:15.310254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:09.819 [2024-11-29 11:55:15.325087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:11:09.819 [2024-11-29 11:55:15.325434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:10.077 [2024-11-29 11:55:15.340444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:11:10.077 [2024-11-29 11:55:15.340796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:10.077 passed 00:11:10.077 Test: blob_create_snapshot_power_failure ...[2024-11-29 11:55:15.385621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:10.077 [2024-11-29 11:55:15.414552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:11:10.077 [2024-11-29 11:55:15.429421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:11:10.077 passed 00:11:10.077 Test: blob_io_unit ...passed 00:11:10.077 Test: blob_io_unit_compatibility ...passed 00:11:10.077 Test: blob_ext_md_pages ...passed 00:11:10.077 Test: blob_esnap_io_4096_4096 ...passed 00:11:10.336 Test: blob_esnap_io_512_512 ...passed 00:11:10.336 Test: blob_esnap_io_4096_512 ...passed 00:11:10.336 Test: blob_esnap_io_512_4096 ...passed 00:11:10.336 Suite: blob_bs_copy_noextent 00:11:10.336 Test: blob_open ...passed 00:11:10.336 Test: blob_create ...[2024-11-29 11:55:15.734619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:11:10.336 passed 00:11:10.336 Test: blob_create_loop ...passed 00:11:10.594 Test: blob_create_fail ...[2024-11-29 11:55:15.847131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:10.594 passed 00:11:10.594 Test: blob_create_internal ...passed 00:11:10.594 Test: blob_create_zero_extent ...passed 00:11:10.594 Test: blob_snapshot ...passed 00:11:10.594 Test: blob_clone ...passed 00:11:10.594 Test: blob_inflate ...[2024-11-29 11:55:16.066935] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:11:10.594 passed 00:11:10.853 Test: blob_delete ...passed 00:11:10.853 Test: blob_resize_test ...[2024-11-29 11:55:16.153222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:11:10.853 passed 00:11:10.853 Test: channel_ops ...passed 00:11:10.853 Test: blob_super ...passed 00:11:10.853 Test: blob_rw_verify_iov ...passed 00:11:10.853 Test: blob_unmap ...passed 00:11:11.112 Test: blob_iter ...passed 00:11:11.112 Test: blob_parse_md ...passed 00:11:11.112 Test: bs_load_pending_removal ...passed 00:11:11.112 Test: bs_unload ...[2024-11-29 11:55:16.484725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:11:11.112 passed 00:11:11.112 Test: bs_usable_clusters ...passed 00:11:11.112 Test: blob_crc ...[2024-11-29 11:55:16.568364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:11.112 [2024-11-29 11:55:16.568727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:11.112 passed 00:11:11.370 Test: blob_flags ...passed 00:11:11.370 Test: bs_version ...passed 00:11:11.370 Test: blob_set_xattrs_test ...[2024-11-29 11:55:16.701221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:11.370 [2024-11-29 11:55:16.701660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:11.370 passed 00:11:11.370 Test: blob_thin_prov_alloc ...passed 00:11:11.629 Test: blob_insert_cluster_msg_test ...passed 00:11:11.629 Test: blob_thin_prov_rw ...passed 00:11:11.629 Test: blob_thin_prov_rle ...passed 00:11:11.629 Test: blob_thin_prov_rw_iov ...passed 00:11:11.629 Test: blob_snapshot_rw ...passed 00:11:11.629 Test: blob_snapshot_rw_iov ...passed 00:11:11.888 Test: blob_inflate_rw ...passed 00:11:12.146 Test: blob_snapshot_freeze_io ...passed 00:11:12.146 Test: blob_operation_split_rw ...passed 00:11:12.405 Test: blob_operation_split_rw_iov ...passed 00:11:12.405 Test: blob_simultaneous_operations ...[2024-11-29 11:55:17.787288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:12.405 [2024-11-29 11:55:17.787690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:12.405 [2024-11-29 11:55:17.788381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:12.405 [2024-11-29 11:55:17.788530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:12.405 [2024-11-29 11:55:17.791871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:12.405 [2024-11-29 11:55:17.792088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:12.405 [2024-11-29 11:55:17.792302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:11:12.405 [2024-11-29 11:55:17.792439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:12.405 passed 00:11:12.405 Test: blob_persist_test ...passed 00:11:12.405 Test: blob_decouple_snapshot ...passed 00:11:12.663 Test: blob_seek_io_unit ...passed 00:11:12.664 Test: blob_nested_freezes ...passed 00:11:12.664 Suite: blob_blob_copy_noextent 00:11:12.664 Test: blob_write ...passed 00:11:12.664 Test: blob_read ...passed 00:11:12.664 Test: blob_rw_verify ...passed 00:11:12.664 Test: blob_rw_verify_iov_nomem ...passed 00:11:12.922 Test: blob_rw_iov_read_only ...passed 00:11:12.922 Test: blob_xattr ...passed 00:11:12.922 Test: blob_dirty_shutdown ...passed 00:11:12.922 Test: blob_is_degraded ...passed 00:11:12.922 Suite: blob_esnap_bs_copy_noextent 00:11:12.922 Test: blob_esnap_create ...passed 00:11:12.922 Test: blob_esnap_thread_add_remove ...passed 00:11:13.180 Test: blob_esnap_clone_snapshot ...passed 00:11:13.180 Test: blob_esnap_clone_inflate ...passed 00:11:13.180 Test: blob_esnap_clone_decouple ...passed 00:11:13.180 Test: blob_esnap_clone_reload ...passed 00:11:13.180 Test: blob_esnap_hotplug ...passed 00:11:13.180 Suite: blob_copy_extent 00:11:13.181 Test: blob_init ...[2024-11-29 11:55:18.643270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:11:13.181 passed 00:11:13.181 Test: blob_thin_provision ...passed 00:11:13.181 Test: blob_read_only ...passed 00:11:13.493 Test: bs_load ...[2024-11-29 11:55:18.703644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:11:13.493 passed 00:11:13.493 Test: bs_load_custom_cluster_size ...passed 00:11:13.493 Test: bs_load_after_failed_grow ...passed 00:11:13.493 Test: bs_cluster_sz ...[2024-11-29 11:55:18.737205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:11:13.493 [2024-11-29 11:55:18.737497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:11:13.493 [2024-11-29 11:55:18.737709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:11:13.493 passed 00:11:13.493 Test: bs_resize_md ...passed 00:11:13.493 Test: bs_destroy ...passed 00:11:13.493 Test: bs_type ...passed 00:11:13.493 Test: bs_super_block ...passed 00:11:13.493 Test: bs_test_recover_cluster_count ...passed 00:11:13.493 Test: bs_grow_live ...passed 00:11:13.493 Test: bs_grow_live_no_space ...passed 00:11:13.493 Test: bs_test_grow ...passed 00:11:13.493 Test: blob_serialize_test ...passed 00:11:13.493 Test: super_block_crc ...passed 00:11:13.493 Test: blob_thin_prov_write_count_io ...passed 00:11:13.493 Test: bs_load_iter_test ...passed 00:11:13.493 Test: blob_relations ...[2024-11-29 11:55:18.926915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:13.493 [2024-11-29 11:55:18.927251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.493 [2024-11-29 11:55:18.928334] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:13.493 [2024-11-29 11:55:18.928518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.493 passed 00:11:13.493 Test: blob_relations2 ...[2024-11-29 11:55:18.946158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:13.493 [2024-11-29 11:55:18.946563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.493 [2024-11-29 11:55:18.946743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:13.493 [2024-11-29 11:55:18.946866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.494 [2024-11-29 11:55:18.948288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:13.494 [2024-11-29 11:55:18.948462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.494 [2024-11-29 11:55:18.948953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:11:13.494 [2024-11-29 11:55:18.949117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.494 passed 00:11:13.494 Test: blob_relations3 ...passed 00:11:13.752 Test: blobstore_clean_power_failure ...passed 00:11:13.752 Test: blob_delete_snapshot_power_failure ...[2024-11-29 11:55:19.144017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:11:13.752 [2024-11-29 11:55:19.159849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:11:13.752 [2024-11-29 11:55:19.175493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:13.752 [2024-11-29 11:55:19.175905] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:13.752 [2024-11-29 11:55:19.175990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.752 [2024-11-29 11:55:19.194235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:11:13.752 [2024-11-29 11:55:19.194644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:11:13.752 [2024-11-29 11:55:19.194709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:13.752 [2024-11-29 11:55:19.194819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.752 [2024-11-29 11:55:19.209511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:11:13.752 [2024-11-29 11:55:19.209886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:11:13.752 [2024-11-29 11:55:19.209951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:11:13.752 [2024-11-29 11:55:19.210065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.752 [2024-11-29 11:55:19.224933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:11:13.752 [2024-11-29 11:55:19.225355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.752 [2024-11-29 11:55:19.239796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:11:13.752 [2024-11-29 11:55:19.240174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:13.752 [2024-11-29 11:55:19.254649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:11:13.752 [2024-11-29 11:55:19.255018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:14.012 passed 00:11:14.012 Test: blob_create_snapshot_power_failure ...[2024-11-29 11:55:19.298560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:11:14.012 [2024-11-29 11:55:19.312983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:11:14.012 [2024-11-29 11:55:19.341433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:11:14.012 [2024-11-29 11:55:19.356231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:11:14.012 passed 00:11:14.012 Test: blob_io_unit ...passed 00:11:14.012 Test: blob_io_unit_compatibility ...passed 00:11:14.012 Test: blob_ext_md_pages ...passed 00:11:14.012 Test: blob_esnap_io_4096_4096 ...passed 00:11:14.012 Test: blob_esnap_io_512_512 ...passed 00:11:14.271 Test: blob_esnap_io_4096_512 ...passed 00:11:14.271 Test: 
blob_esnap_io_512_4096 ...passed 00:11:14.271 Suite: blob_bs_copy_extent 00:11:14.271 Test: blob_open ...passed 00:11:14.271 Test: blob_create ...[2024-11-29 11:55:19.650956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:11:14.271 passed 00:11:14.271 Test: blob_create_loop ...passed 00:11:14.271 Test: blob_create_fail ...[2024-11-29 11:55:19.771317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:14.530 passed 00:11:14.530 Test: blob_create_internal ...passed 00:11:14.530 Test: blob_create_zero_extent ...passed 00:11:14.530 Test: blob_snapshot ...passed 00:11:14.530 Test: blob_clone ...passed 00:11:14.530 Test: blob_inflate ...[2024-11-29 11:55:19.992131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:11:14.530 passed 00:11:14.788 Test: blob_delete ...passed 00:11:14.788 Test: blob_resize_test ...[2024-11-29 11:55:20.087742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:11:14.788 passed 00:11:14.788 Test: channel_ops ...passed 00:11:14.788 Test: blob_super ...passed 00:11:14.788 Test: blob_rw_verify_iov ...passed 00:11:14.788 Test: blob_unmap ...passed 00:11:15.046 Test: blob_iter ...passed 00:11:15.046 Test: blob_parse_md ...passed 00:11:15.046 Test: bs_load_pending_removal ...passed 00:11:15.046 Test: bs_unload ...[2024-11-29 11:55:20.436278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:11:15.046 passed 00:11:15.046 Test: bs_usable_clusters ...passed 00:11:15.046 Test: blob_crc ...[2024-11-29 11:55:20.523700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:15.046 [2024-11-29 11:55:20.524092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:11:15.046 passed 00:11:15.304 Test: blob_flags ...passed 00:11:15.304 Test: bs_version ...passed 00:11:15.304 Test: blob_set_xattrs_test ...[2024-11-29 11:55:20.651788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:15.304 [2024-11-29 11:55:20.652154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:11:15.304 passed 00:11:15.304 Test: blob_thin_prov_alloc ...passed 00:11:15.562 Test: blob_insert_cluster_msg_test ...passed 00:11:15.562 Test: blob_thin_prov_rw ...passed 00:11:15.562 Test: blob_thin_prov_rle ...passed 00:11:15.562 Test: blob_thin_prov_rw_iov ...passed 00:11:15.562 Test: blob_snapshot_rw ...passed 00:11:15.562 Test: blob_snapshot_rw_iov ...passed 00:11:16.127 Test: blob_inflate_rw ...passed 00:11:16.127 Test: blob_snapshot_freeze_io ...passed 00:11:16.127 Test: blob_operation_split_rw ...passed 00:11:16.386 Test: blob_operation_split_rw_iov ...passed 00:11:16.386 Test: blob_simultaneous_operations ...[2024-11-29 11:55:21.765628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:16.386 [2024-11-29 
11:55:21.766129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:16.386 [2024-11-29 11:55:21.766715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:16.386 [2024-11-29 11:55:21.766887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:16.386 [2024-11-29 11:55:21.770158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:16.386 [2024-11-29 11:55:21.770483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:16.386 [2024-11-29 11:55:21.770692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:11:16.386 [2024-11-29 11:55:21.770924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:11:16.386 passed 00:11:16.386 Test: blob_persist_test ...passed 00:11:16.643 Test: blob_decouple_snapshot ...passed 00:11:16.643 Test: blob_seek_io_unit ...passed 00:11:16.643 Test: blob_nested_freezes ...passed 00:11:16.643 Suite: blob_blob_copy_extent 00:11:16.643 Test: blob_write ...passed 00:11:16.643 Test: blob_read ...passed 00:11:16.643 Test: blob_rw_verify ...passed 00:11:16.901 Test: blob_rw_verify_iov_nomem ...passed 00:11:16.901 Test: blob_rw_iov_read_only ...passed 00:11:16.901 Test: blob_xattr ...passed 00:11:16.901 Test: blob_dirty_shutdown ...passed 00:11:16.901 Test: blob_is_degraded ...passed 00:11:16.901 Suite: blob_esnap_bs_copy_extent 00:11:17.159 Test: blob_esnap_create ...passed 00:11:17.159 Test: blob_esnap_thread_add_remove ...passed 00:11:17.159 Test: blob_esnap_clone_snapshot ...passed 00:11:17.159 Test: blob_esnap_clone_inflate ...passed 00:11:17.159 Test: blob_esnap_clone_decouple ...passed 00:11:17.159 Test: blob_esnap_clone_reload ...passed 00:11:17.418 Test: blob_esnap_hotplug ...passed 00:11:17.418 00:11:17.418 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.418 suites 16 16 n/a 0 0 00:11:17.418 tests 348 348 348 0 0 00:11:17.418 asserts 92605 92605 92605 0 n/a 00:11:17.418 00:11:17.418 Elapsed time = 16.089 seconds 00:11:17.418 11:55:22 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:11:17.418 00:11:17.418 00:11:17.418 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.418 http://cunit.sourceforge.net/ 00:11:17.418 00:11:17.418 00:11:17.418 Suite: blob_bdev 00:11:17.418 Test: create_bs_dev ...passed 00:11:17.418 Test: create_bs_dev_ro ...[2024-11-29 11:55:22.789511] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:11:17.418 passed 00:11:17.418 Test: create_bs_dev_rw ...passed 00:11:17.418 Test: claim_bs_dev ...[2024-11-29 11:55:22.790864] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:11:17.418 passed 00:11:17.418 Test: claim_bs_dev_ro ...passed 00:11:17.418 Test: deferred_destroy_refs ...passed 00:11:17.418 Test: deferred_destroy_channels ...passed 00:11:17.418 Test: deferred_destroy_threads ...passed 00:11:17.418 00:11:17.418 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.418 suites 1 1 n/a 0 0 00:11:17.418 tests 8 8 8 0 0 00:11:17.418 
asserts 119 119 119 0 n/a 00:11:17.418 00:11:17.418 Elapsed time = 0.002 seconds 00:11:17.418 11:55:22 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:11:17.418 00:11:17.418 00:11:17.418 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.418 http://cunit.sourceforge.net/ 00:11:17.418 00:11:17.418 00:11:17.418 Suite: tree 00:11:17.418 Test: blobfs_tree_op_test ...passed 00:11:17.418 00:11:17.418 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.418 suites 1 1 n/a 0 0 00:11:17.418 tests 1 1 1 0 0 00:11:17.418 asserts 27 27 27 0 n/a 00:11:17.418 00:11:17.418 Elapsed time = 0.000 seconds 00:11:17.418 11:55:22 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:11:17.418 00:11:17.418 00:11:17.418 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.418 http://cunit.sourceforge.net/ 00:11:17.418 00:11:17.418 00:11:17.418 Suite: blobfs_async_ut 00:11:17.418 Test: fs_init ...passed 00:11:17.676 Test: fs_open ...passed 00:11:17.676 Test: fs_create ...passed 00:11:17.676 Test: fs_truncate ...passed 00:11:17.676 Test: fs_rename ...[2024-11-29 11:55:23.020538] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:11:17.676 passed 00:11:17.676 Test: fs_rw_async ...passed 00:11:17.676 Test: fs_writev_readv_async ...passed 00:11:17.676 Test: tree_find_buffer_ut ...passed 00:11:17.676 Test: channel_ops ...passed 00:11:17.676 Test: channel_ops_sync ...passed 00:11:17.676 00:11:17.676 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.676 suites 1 1 n/a 0 0 00:11:17.676 tests 10 10 10 0 0 00:11:17.676 asserts 292 292 292 0 n/a 00:11:17.676 00:11:17.676 Elapsed time = 0.225 seconds 00:11:17.676 11:55:23 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:11:17.676 00:11:17.676 00:11:17.676 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.676 http://cunit.sourceforge.net/ 00:11:17.676 00:11:17.676 00:11:17.676 Suite: blobfs_sync_ut 00:11:17.934 Test: cache_read_after_write ...[2024-11-29 11:55:23.212841] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:11:17.934 passed 00:11:17.934 Test: file_length ...passed 00:11:17.934 Test: append_write_to_extend_blob ...passed 00:11:17.934 Test: partial_buffer ...passed 00:11:17.934 Test: cache_write_null_buffer ...passed 00:11:17.934 Test: fs_create_sync ...passed 00:11:17.934 Test: fs_rename_sync ...passed 00:11:17.934 Test: cache_append_no_cache ...passed 00:11:17.934 Test: fs_delete_file_without_close ...passed 00:11:17.934 00:11:17.934 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.934 suites 1 1 n/a 0 0 00:11:17.934 tests 9 9 9 0 0 00:11:17.934 asserts 345 345 345 0 n/a 00:11:17.934 00:11:17.934 Elapsed time = 0.405 seconds 00:11:17.934 11:55:23 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:11:17.934 00:11:17.934 00:11:17.934 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.934 http://cunit.sourceforge.net/ 00:11:17.934 00:11:17.934 00:11:17.934 Suite: blobfs_bdev_ut 00:11:17.934 Test: spdk_blobfs_bdev_detect_test ...[2024-11-29 11:55:23.439084] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:11:17.934 passed 00:11:17.934 Test: spdk_blobfs_bdev_create_test ...[2024-11-29 11:55:23.440706] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:11:17.934 passed 00:11:17.934 Test: spdk_blobfs_bdev_mount_test ...passed 00:11:17.934 00:11:17.934 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.934 suites 1 1 n/a 0 0 00:11:17.934 tests 3 3 3 0 0 00:11:17.934 asserts 9 9 9 0 n/a 00:11:17.934 00:11:17.934 Elapsed time = 0.002 seconds 00:11:18.192 00:11:18.192 real 0m17.066s 00:11:18.192 user 0m16.286s 00:11:18.192 sys 0m0.823s 00:11:18.192 11:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.192 11:55:23 -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 ************************************ 00:11:18.192 END TEST unittest_blob_blobfs 00:11:18.192 ************************************ 00:11:18.192 11:55:23 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:11:18.192 11:55:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.192 11:55:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.192 11:55:23 -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 ************************************ 00:11:18.192 START TEST unittest_event 00:11:18.192 ************************************ 00:11:18.192 11:55:23 -- common/autotest_common.sh@1114 -- # unittest_event 00:11:18.192 11:55:23 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:11:18.192 00:11:18.192 00:11:18.192 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.192 http://cunit.sourceforge.net/ 00:11:18.193 00:11:18.193 00:11:18.193 Suite: app_suite 00:11:18.193 Test: test_spdk_app_parse_args ...app_ut [options] 00:11:18.193 options:app_ut: invalid option -- 'z' 00:11:18.193 00:11:18.193 -c, --config JSON config file (default none) 00:11:18.193 --json JSON config file (default none) 00:11:18.193 --json-ignore-init-errors 00:11:18.193 don't exit on invalid config entry 00:11:18.193 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:18.193 -g, --single-file-segments 00:11:18.193 force creating just one hugetlbfs file 00:11:18.193 -h, --help show this usage 00:11:18.193 -i, --shm-id shared memory ID (optional) 00:11:18.193 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:11:18.193 --lcores lcore to CPU mapping list. The list is in the format: 00:11:18.193 [<,lcores[@CPUs]>...] 00:11:18.193 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:18.193 Within the group, '-' is used for range separator, 00:11:18.193 ',' is used for single number separator. 00:11:18.193 '( )' can be omitted for single element group, 00:11:18.193 '@' can be omitted if cpus and lcores have the same value 00:11:18.193 -n, --mem-channels channel number of memory channels used for DPDK 00:11:18.193 -p, --main-core main (primary) core for DPDK 00:11:18.193 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:18.193 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:18.193 --disable-cpumask-locks Disable CPU core lock files. 
00:11:18.193 --silence-noticelog disable notice level logging to stderr 00:11:18.193 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:18.193 -u, --no-pci disable PCI access 00:11:18.193 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:18.193 --max-delay maximum reactor delay (in microseconds) 00:11:18.193 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:18.193 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:18.193 -R, --huge-unlink unlink huge files after initialization 00:11:18.193 -v, --version print SPDK version 00:11:18.193 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:18.193 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:18.193 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:18.193 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:11:18.193 Tracepoints vary in size and can use more than one trace entry. 00:11:18.193 --rpcs-allowed comma-separated list of permitted RPCS 00:11:18.193 --env-context Opaque context for use of the env implementation 00:11:18.193 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:18.193 --no-huge run without using hugepages 00:11:18.193 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:11:18.193 -e, --tpoint-group [:] 00:11:18.193 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:11:18.193 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:11:18.193 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:11:18.193 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:11:18.193 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:11:18.193 app_ut: unrecognized option '--test-long-opt' 00:11:18.193 app_ut [options] 00:11:18.193 options: 00:11:18.193 -c, --config JSON config file (default none) 00:11:18.193 --json JSON config file (default none) 00:11:18.193 --json-ignore-init-errors 00:11:18.193 don't exit on invalid config entry 00:11:18.193 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:18.193 -g, --single-file-segments 00:11:18.193 force creating just one hugetlbfs file 00:11:18.193 -h, --help show this usage 00:11:18.193 -i, --shm-id shared memory ID (optional) 00:11:18.193 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:11:18.193 --lcores lcore to CPU mapping list. The list is in the format: 00:11:18.193 [<,lcores[@CPUs]>...] 00:11:18.193 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:18.193 Within the group, '-' is used for range separator, 00:11:18.193 ',' is used for single number separator. 
00:11:18.193 '( )' can be omitted for single element group, 00:11:18.193 '@' can be omitted if cpus and lcores have the same value 00:11:18.193 -n, --mem-channels channel number of memory channels used for DPDK 00:11:18.193 -p, --main-core main (primary) core for DPDK 00:11:18.193 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:18.193 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:18.193 --disable-cpumask-locks Disable CPU core lock files. 00:11:18.193 --silence-noticelog disable notice level logging to stderr 00:11:18.193 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:18.193 -u, --no-pci disable PCI access 00:11:18.193 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:18.193 --max-delay maximum reactor delay (in microseconds) 00:11:18.193 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:18.193 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:18.193 -R, --huge-unlink unlink huge files after initialization 00:11:18.193 -v, --version print SPDK version 00:11:18.193 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:18.193 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:18.193 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:18.193 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:11:18.193 Tracepoints vary in size and can use more than one trace entry. 00:11:18.193 --rpcs-allowed comma-separated list of permitted RPCS 00:11:18.193 --env-context Opaque context for use of the env implementation 00:11:18.193 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:18.193 --no-huge run without using hugepages 00:11:18.193 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:11:18.193 -e, --tpoint-group [:] 00:11:18.193 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:11:18.193 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:11:18.193 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:11:18.193 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:11:18.193 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:11:18.193 [2024-11-29 11:55:23.511384] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:11:18.193 [2024-11-29 11:55:23.511849] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:11:18.193 app_ut [options] 00:11:18.193 options: 00:11:18.193 -c, --config JSON config file (default none) 00:11:18.193 --json JSON config file (default none) 00:11:18.193 --json-ignore-init-errors 00:11:18.193 don't exit on invalid config entry 00:11:18.193 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:18.193 -g, --single-file-segments 00:11:18.193 force creating just one hugetlbfs file 00:11:18.193 -h, --help show this usage 00:11:18.193 -i, --shm-id shared memory ID (optional) 00:11:18.193 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:11:18.193 --lcores lcore to CPU mapping list. The list is in the format: 00:11:18.193 [<,lcores[@CPUs]>...] 00:11:18.193 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:18.193 Within the group, '-' is used for range separator, 00:11:18.193 ',' is used for single number separator. 00:11:18.193 '( )' can be omitted for single element group, 00:11:18.193 '@' can be omitted if cpus and lcores have the same value 00:11:18.193 -n, --mem-channels channel number of memory channels used for DPDK 00:11:18.193 -p, --main-core main (primary) core for DPDK 00:11:18.193 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:18.193 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:18.193 --disable-cpumask-locks Disable CPU core lock files. 00:11:18.193 --silence-noticelog disable notice level logging to stderr 00:11:18.193 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:18.193 -u, --no-pci disable PCI access 00:11:18.193 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:18.193 --max-delay maximum reactor delay (in microseconds) 00:11:18.193 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:18.193 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:18.193 -R, --huge-unlink unlink huge files after initialization 00:11:18.193 -v, --version print SPDK version 00:11:18.193 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:18.193 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:18.193 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:18.194 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:11:18.194 Tracepoints vary in size and can use more than one trace entry. 00:11:18.194 --rpcs-allowed comma-separated list of permitted RPCS 00:11:18.194 --env-context Opaque context for use of the env implementation 00:11:18.194 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:18.194 --no-huge run without using hugepages 00:11:18.194 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:11:18.194 -e, --tpoint-group [:] 00:11:18.194 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:11:18.194 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:11:18.194 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:11:18.194 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:11:18.194 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:11:18.194 passed 00:11:18.194 00:11:18.194 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.194 suites 1 1 n/a 0 0 00:11:18.194 tests 1 1 1 0 0 00:11:18.194 asserts 8 8 8 0 n/a 00:11:18.194 00:11:18.194 Elapsed time = 0.002 seconds 00:11:18.194 [2024-11-29 11:55:23.513212] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:11:18.194 11:55:23 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:11:18.194 00:11:18.194 00:11:18.194 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.194 http://cunit.sourceforge.net/ 00:11:18.194 00:11:18.194 00:11:18.194 Suite: app_suite 00:11:18.194 Test: test_create_reactor ...passed 00:11:18.194 Test: test_init_reactors ...passed 00:11:18.194 Test: test_event_call ...passed 00:11:18.194 Test: test_schedule_thread ...passed 00:11:18.194 Test: test_reschedule_thread ...passed 00:11:18.194 Test: test_bind_thread ...passed 00:11:18.194 Test: test_for_each_reactor ...passed 00:11:18.194 Test: test_reactor_stats ...passed 00:11:18.194 Test: test_scheduler ...passed 00:11:18.194 Test: test_governor ...passed 00:11:18.194 00:11:18.194 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.194 suites 1 1 n/a 0 0 00:11:18.194 tests 10 10 10 0 0 00:11:18.194 asserts 344 344 344 0 n/a 00:11:18.194 00:11:18.194 Elapsed time = 0.020 seconds 00:11:18.194 00:11:18.194 real 0m0.090s 00:11:18.194 user 0m0.044s 00:11:18.194 sys 0m0.037s 00:11:18.194 ************************************ 00:11:18.194 END TEST unittest_event 00:11:18.194 ************************************ 00:11:18.194 11:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.194 11:55:23 -- common/autotest_common.sh@10 -- # set +x 00:11:18.194 11:55:23 -- unit/unittest.sh@209 -- # uname -s 00:11:18.194 11:55:23 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:11:18.194 11:55:23 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:11:18.194 11:55:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.194 11:55:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.194 11:55:23 -- common/autotest_common.sh@10 -- # set +x 00:11:18.194 ************************************ 00:11:18.194 START TEST unittest_ftl 00:11:18.194 ************************************ 00:11:18.194 11:55:23 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:11:18.194 11:55:23 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:11:18.194 00:11:18.194 00:11:18.194 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.194 http://cunit.sourceforge.net/ 00:11:18.194 00:11:18.194 00:11:18.194 Suite: ftl_band_suite 00:11:18.194 Test: test_band_block_offset_from_addr_base ...passed 00:11:18.452 Test: test_band_block_offset_from_addr_offset ...passed 00:11:18.452 Test: test_band_addr_from_block_offset ...passed 00:11:18.452 Test: test_band_set_addr ...passed 00:11:18.452 Test: test_invalidate_addr ...passed 00:11:18.452 Test: test_next_xfer_addr ...passed 00:11:18.452 00:11:18.452 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.452 suites 1 1 n/a 0 0 00:11:18.452 tests 6 6 6 0 0 00:11:18.452 asserts 30356 30356 30356 0 n/a 00:11:18.452 
00:11:18.452 Elapsed time = 0.197 seconds 00:11:18.452 11:55:23 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:11:18.452 00:11:18.452 00:11:18.452 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.452 http://cunit.sourceforge.net/ 00:11:18.452 00:11:18.452 00:11:18.452 Suite: ftl_bitmap 00:11:18.452 Test: test_ftl_bitmap_create ...passed 00:11:18.452 Test: test_ftl_bitmap_get ...[2024-11-29 11:55:23.918023] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:11:18.452 [2024-11-29 11:55:23.918307] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:11:18.452 passed 00:11:18.452 Test: test_ftl_bitmap_set ...passed 00:11:18.452 Test: test_ftl_bitmap_clear ...passed 00:11:18.452 Test: test_ftl_bitmap_find_first_set ...passed 00:11:18.452 Test: test_ftl_bitmap_find_first_clear ...passed 00:11:18.452 Test: test_ftl_bitmap_count_set ...passed 00:11:18.452 00:11:18.452 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.452 suites 1 1 n/a 0 0 00:11:18.452 tests 7 7 7 0 0 00:11:18.452 asserts 137 137 137 0 n/a 00:11:18.452 00:11:18.452 Elapsed time = 0.001 seconds 00:11:18.452 11:55:23 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:11:18.452 00:11:18.452 00:11:18.452 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.452 http://cunit.sourceforge.net/ 00:11:18.452 00:11:18.452 00:11:18.452 Suite: ftl_io_suite 00:11:18.452 Test: test_completion ...passed 00:11:18.452 Test: test_multiple_ios ...passed 00:11:18.452 00:11:18.452 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.452 suites 1 1 n/a 0 0 00:11:18.452 tests 2 2 2 0 0 00:11:18.452 asserts 47 47 47 0 n/a 00:11:18.452 00:11:18.452 Elapsed time = 0.003 seconds 00:11:18.711 11:55:23 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:11:18.711 00:11:18.711 00:11:18.711 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.711 http://cunit.sourceforge.net/ 00:11:18.711 00:11:18.711 00:11:18.711 Suite: ftl_mngt 00:11:18.711 Test: test_next_step ...passed 00:11:18.711 Test: test_continue_step ...passed 00:11:18.711 Test: test_get_func_and_step_cntx_alloc ...passed 00:11:18.711 Test: test_fail_step ...passed 00:11:18.711 Test: test_mngt_call_and_call_rollback ...passed 00:11:18.711 Test: test_nested_process_failure ...passed 00:11:18.711 00:11:18.711 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.711 suites 1 1 n/a 0 0 00:11:18.711 tests 6 6 6 0 0 00:11:18.711 asserts 176 176 176 0 n/a 00:11:18.711 00:11:18.711 Elapsed time = 0.001 seconds 00:11:18.711 11:55:23 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:11:18.711 00:11:18.711 00:11:18.711 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.711 http://cunit.sourceforge.net/ 00:11:18.711 00:11:18.711 00:11:18.711 Suite: ftl_mempool 00:11:18.711 Test: test_ftl_mempool_create ...passed 00:11:18.711 Test: test_ftl_mempool_get_put ...passed 00:11:18.711 00:11:18.711 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.711 suites 1 1 n/a 0 0 00:11:18.711 tests 2 2 2 0 0 00:11:18.711 asserts 36 36 36 0 n/a 00:11:18.711 00:11:18.711 Elapsed time = 0.000 seconds 00:11:18.711 11:55:24 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:11:18.711 00:11:18.711 00:11:18.711 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.711 http://cunit.sourceforge.net/ 00:11:18.711 00:11:18.711 00:11:18.711 Suite: ftl_addr64_suite 00:11:18.711 Test: test_addr_cached ...passed 00:11:18.711 00:11:18.711 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.711 suites 1 1 n/a 0 0 00:11:18.711 tests 1 1 1 0 0 00:11:18.711 asserts 1536 1536 1536 0 n/a 00:11:18.711 00:11:18.711 Elapsed time = 0.000 seconds 00:11:18.711 11:55:24 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:11:18.711 00:11:18.711 00:11:18.711 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.711 http://cunit.sourceforge.net/ 00:11:18.711 00:11:18.711 00:11:18.711 Suite: ftl_sb 00:11:18.711 Test: test_sb_crc_v2 ...passed 00:11:18.711 Test: test_sb_crc_v3 ...passed 00:11:18.711 Test: test_sb_v3_md_layout ...[2024-11-29 11:55:24.060705] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:11:18.711 [2024-11-29 11:55:24.061186] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:11:18.711 [2024-11-29 11:55:24.061262] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:11:18.711 [2024-11-29 11:55:24.061325] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:11:18.711 [2024-11-29 11:55:24.061383] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:11:18.711 [2024-11-29 11:55:24.061510] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:11:18.711 [2024-11-29 11:55:24.061568] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:11:18.711 [2024-11-29 11:55:24.061650] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:11:18.711 [2024-11-29 11:55:24.061790] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:11:18.711 passed 00:11:18.711 Test: test_sb_v5_md_layout ...[2024-11-29 11:55:24.061866] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:11:18.711 [2024-11-29 11:55:24.061924] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:11:18.711 passed 00:11:18.711 00:11:18.712 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.712 suites 1 1 n/a 0 0 00:11:18.712 tests 4 4 4 0 0 00:11:18.712 asserts 148 148 148 0 n/a 00:11:18.712 00:11:18.712 Elapsed time = 0.003 seconds 00:11:18.712 11:55:24 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:11:18.712 00:11:18.712 00:11:18.712 CUnit - A unit testing framework 
for C - Version 2.1-3 00:11:18.712 http://cunit.sourceforge.net/ 00:11:18.712 00:11:18.712 00:11:18.712 Suite: ftl_layout_upgrade 00:11:18.712 Test: test_l2p_upgrade ...passed 00:11:18.712 00:11:18.712 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.712 suites 1 1 n/a 0 0 00:11:18.712 tests 1 1 1 0 0 00:11:18.712 asserts 140 140 140 0 n/a 00:11:18.712 00:11:18.712 Elapsed time = 0.001 seconds 00:11:18.712 00:11:18.712 real 0m0.486s 00:11:18.712 user 0m0.181s 00:11:18.712 sys 0m0.307s 00:11:18.712 ************************************ 00:11:18.712 END TEST unittest_ftl 00:11:18.712 ************************************ 00:11:18.712 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.712 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.712 11:55:24 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:11:18.712 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.712 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.712 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.712 ************************************ 00:11:18.712 START TEST unittest_accel 00:11:18.712 ************************************ 00:11:18.712 11:55:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:11:18.712 00:11:18.712 00:11:18.712 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.712 http://cunit.sourceforge.net/ 00:11:18.712 00:11:18.712 00:11:18.712 Suite: accel_sequence 00:11:18.712 Test: test_sequence_fill_copy ...passed 00:11:18.712 Test: test_sequence_abort ...passed 00:11:18.712 Test: test_sequence_append_error ...passed 00:11:18.712 Test: test_sequence_completion_error ...[2024-11-29 11:55:24.184870] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f7f80c237c0 00:11:18.712 passed 00:11:18.712 Test: test_sequence_decompress ...[2024-11-29 11:55:24.185217] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f7f80c237c0 00:11:18.712 [2024-11-29 11:55:24.185273] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f7f80c237c0 00:11:18.712 [2024-11-29 11:55:24.185350] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f7f80c237c0 00:11:18.712 passed 00:11:18.712 Test: test_sequence_reverse ...passed 00:11:18.712 Test: test_sequence_copy_elision ...passed 00:11:18.712 Test: test_sequence_accel_buffers ...passed 00:11:18.712 Test: test_sequence_memory_domain ...[2024-11-29 11:55:24.196021] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:11:18.712 passed 00:11:18.712 Test: test_sequence_module_memory_domain ...[2024-11-29 11:55:24.196238] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:11:18.712 passed 00:11:18.712 Test: test_sequence_crypto ...passed 00:11:18.712 Test: test_sequence_driver ...[2024-11-29 11:55:24.202651] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f7f7fe4f7c0 using driver: ut 00:11:18.712 
[2024-11-29 11:55:24.202836] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f7f7fe4f7c0 through driver: ut 00:11:18.712 passed 00:11:18.712 Test: test_sequence_same_iovs ...passed 00:11:18.712 Test: test_sequence_crc32 ...passed 00:11:18.712 Suite: accel 00:11:18.712 Test: test_spdk_accel_task_complete ...passed 00:11:18.712 Test: test_get_task ...passed 00:11:18.712 Test: test_spdk_accel_submit_copy ...passed 00:11:18.712 Test: test_spdk_accel_submit_dualcast ...[2024-11-29 11:55:24.207274] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:11:18.712 [2024-11-29 11:55:24.207362] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:11:18.712 passed 00:11:18.712 Test: test_spdk_accel_submit_compare ...passed 00:11:18.712 Test: test_spdk_accel_submit_fill ...passed 00:11:18.712 Test: test_spdk_accel_submit_crc32c ...passed 00:11:18.712 Test: test_spdk_accel_submit_crc32cv ...passed 00:11:18.712 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:11:18.712 Test: test_spdk_accel_submit_xor ...passed 00:11:18.712 Test: test_spdk_accel_module_find_by_name ...passed 00:11:18.712 Test: test_spdk_accel_module_register ...passed 00:11:18.712 00:11:18.712 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.712 suites 2 2 n/a 0 0 00:11:18.712 tests 26 26 26 0 0 00:11:18.712 asserts 831 831 831 0 n/a 00:11:18.712 00:11:18.712 Elapsed time = 0.033 seconds 00:11:18.971 00:11:18.971 real 0m0.071s 00:11:18.971 user 0m0.040s 00:11:18.971 sys 0m0.031s 00:11:18.971 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.971 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 ************************************ 00:11:18.971 END TEST unittest_accel 00:11:18.971 ************************************ 00:11:18.971 11:55:24 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:11:18.971 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.971 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.971 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 ************************************ 00:11:18.971 START TEST unittest_ioat 00:11:18.971 ************************************ 00:11:18.971 11:55:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:11:18.971 00:11:18.971 00:11:18.971 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.971 http://cunit.sourceforge.net/ 00:11:18.971 00:11:18.971 00:11:18.971 Suite: ioat 00:11:18.971 Test: ioat_state_check ...passed 00:11:18.971 00:11:18.971 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.971 suites 1 1 n/a 0 0 00:11:18.971 tests 1 1 1 0 0 00:11:18.971 asserts 32 32 32 0 n/a 00:11:18.971 00:11:18.971 Elapsed time = 0.000 seconds 00:11:18.971 00:11:18.971 real 0m0.025s 00:11:18.971 user 0m0.008s 00:11:18.971 sys 0m0.017s 00:11:18.971 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.971 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 ************************************ 00:11:18.971 END TEST unittest_ioat 00:11:18.971 ************************************ 00:11:18.971 11:55:24 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:18.971 11:55:24 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:11:18.971 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.971 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.971 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 ************************************ 00:11:18.971 START TEST unittest_idxd_user 00:11:18.971 ************************************ 00:11:18.971 11:55:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:11:18.971 00:11:18.971 00:11:18.971 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.971 http://cunit.sourceforge.net/ 00:11:18.971 00:11:18.971 00:11:18.971 Suite: idxd_user 00:11:18.971 Test: test_idxd_wait_cmd ...[2024-11-29 11:55:24.369549] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:11:18.971 [2024-11-29 11:55:24.369975] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:11:18.971 passed 00:11:18.971 Test: test_idxd_reset_dev ...[2024-11-29 11:55:24.370152] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:11:18.971 passed 00:11:18.971 Test: test_idxd_group_config ...[2024-11-29 11:55:24.370210] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:11:18.971 passed 00:11:18.971 Test: test_idxd_wq_config ...passed 00:11:18.971 00:11:18.971 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.971 suites 1 1 n/a 0 0 00:11:18.971 tests 4 4 4 0 0 00:11:18.971 asserts 20 20 20 0 n/a 00:11:18.971 00:11:18.971 Elapsed time = 0.001 seconds 00:11:18.971 00:11:18.971 real 0m0.031s 00:11:18.971 user 0m0.021s 00:11:18.971 sys 0m0.011s 00:11:18.971 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.971 ************************************ 00:11:18.971 END TEST unittest_idxd_user 00:11:18.971 ************************************ 00:11:18.971 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 11:55:24 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:11:18.971 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.971 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.971 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 ************************************ 00:11:18.971 START TEST unittest_iscsi 00:11:18.971 ************************************ 00:11:18.971 11:55:24 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:11:18.971 11:55:24 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:11:18.971 00:11:18.971 00:11:18.971 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.971 http://cunit.sourceforge.net/ 00:11:18.971 00:11:18.971 00:11:18.971 Suite: conn_suite 00:11:18.971 Test: read_task_split_in_order_case ...passed 00:11:18.971 Test: read_task_split_reverse_order_case ...passed 00:11:18.971 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:11:18.971 Test: process_non_read_task_completion_test ...passed 00:11:18.971 Test: free_tasks_on_connection ...passed 00:11:18.971 Test: free_tasks_with_queued_datain ...passed 00:11:18.971 Test: 
abort_queued_datain_task_test ...passed 00:11:18.971 Test: abort_queued_datain_tasks_test ...passed 00:11:18.971 00:11:18.971 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.971 suites 1 1 n/a 0 0 00:11:18.971 tests 8 8 8 0 0 00:11:18.971 asserts 230 230 230 0 n/a 00:11:18.971 00:11:18.971 Elapsed time = 0.000 seconds 00:11:18.971 11:55:24 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:11:18.971 00:11:18.971 00:11:18.971 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.971 http://cunit.sourceforge.net/ 00:11:18.971 00:11:18.971 00:11:18.971 Suite: iscsi_suite 00:11:18.971 Test: param_negotiation_test ...passed 00:11:18.971 Test: list_negotiation_test ...passed 00:11:18.971 Test: parse_valid_test ...passed 00:11:18.971 Test: parse_invalid_test ...[2024-11-29 11:55:24.477672] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:11:18.971 [2024-11-29 11:55:24.478041] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:11:18.971 [2024-11-29 11:55:24.478118] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:11:18.971 [2024-11-29 11:55:24.478227] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:11:18.971 [2024-11-29 11:55:24.478419] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:11:18.971 [2024-11-29 11:55:24.478507] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:11:18.971 [2024-11-29 11:55:24.478678] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:11:18.971 passed 00:11:18.971 00:11:18.971 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.971 suites 1 1 n/a 0 0 00:11:18.971 tests 4 4 4 0 0 00:11:18.971 asserts 161 161 161 0 n/a 00:11:18.971 00:11:18.971 Elapsed time = 0.006 seconds 00:11:19.231 11:55:24 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:11:19.231 00:11:19.231 00:11:19.231 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.231 http://cunit.sourceforge.net/ 00:11:19.231 00:11:19.231 00:11:19.231 Suite: iscsi_target_node_suite 00:11:19.231 Test: add_lun_test_cases ...[2024-11-29 11:55:24.507677] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:11:19.231 [2024-11-29 11:55:24.508020] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:11:19.231 [2024-11-29 11:55:24.508125] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:11:19.231 [2024-11-29 11:55:24.508169] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:11:19.231 [2024-11-29 11:55:24.508205] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:11:19.231 passed 00:11:19.231 Test: allow_any_allowed ...passed 00:11:19.231 Test: allow_ipv6_allowed ...passed 00:11:19.231 Test: allow_ipv6_denied ...passed 00:11:19.231 Test: allow_ipv6_invalid ...passed 00:11:19.231 Test: allow_ipv4_allowed ...passed 00:11:19.231 Test: allow_ipv4_denied ...passed 00:11:19.231 Test: allow_ipv4_invalid 
...passed 00:11:19.231 Test: node_access_allowed ...passed 00:11:19.231 Test: node_access_denied_by_empty_netmask ...passed 00:11:19.231 Test: node_access_multi_initiator_groups_cases ...passed 00:11:19.231 Test: allow_iscsi_name_multi_maps_case ...passed 00:11:19.231 Test: chap_param_test_cases ...[2024-11-29 11:55:24.508621] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:11:19.231 [2024-11-29 11:55:24.508668] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:11:19.231 [2024-11-29 11:55:24.508728] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:11:19.231 [2024-11-29 11:55:24.508757] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:11:19.231 passed 00:11:19.231 00:11:19.231 [2024-11-29 11:55:24.508799] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:11:19.231 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.231 suites 1 1 n/a 0 0 00:11:19.231 tests 13 13 13 0 0 00:11:19.231 asserts 50 50 50 0 n/a 00:11:19.231 00:11:19.231 Elapsed time = 0.001 seconds 00:11:19.231 11:55:24 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:11:19.231 00:11:19.231 00:11:19.231 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.231 http://cunit.sourceforge.net/ 00:11:19.231 00:11:19.231 00:11:19.231 Suite: iscsi_suite 00:11:19.231 Test: op_login_check_target_test ...passed 00:11:19.231 Test: op_login_session_normal_test ...[2024-11-29 11:55:24.539929] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:11:19.231 [2024-11-29 11:55:24.540258] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:11:19.231 [2024-11-29 11:55:24.540309] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:11:19.231 [2024-11-29 11:55:24.540342] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:11:19.231 [2024-11-29 11:55:24.540406] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:11:19.231 passed 00:11:19.231 Test: maxburstlength_test ...[2024-11-29 11:55:24.540500] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:11:19.231 [2024-11-29 11:55:24.540584] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:11:19.231 [2024-11-29 11:55:24.540641] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:11:19.231 [2024-11-29 11:55:24.540849] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:11:19.231 [2024-11-29 11:55:24.540908] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:11:19.231 passed 00:11:19.231 Test: underflow_for_read_transfer_test ...passed 00:11:19.231 Test: underflow_for_zero_read_transfer_test ...passed 00:11:19.231 Test: underflow_for_request_sense_test ...passed 00:11:19.231 Test: underflow_for_check_condition_test ...passed 00:11:19.231 Test: add_transfer_task_test ...passed 00:11:19.231 Test: get_transfer_task_test ...passed 00:11:19.232 Test: del_transfer_task_test ...passed 00:11:19.232 Test: clear_all_transfer_tasks_test ...passed 00:11:19.232 Test: build_iovs_test ...passed 00:11:19.232 Test: build_iovs_with_md_test ...passed 00:11:19.232 Test: pdu_hdr_op_login_test ...[2024-11-29 11:55:24.542111] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:11:19.232 passed 00:11:19.232 Test: pdu_hdr_op_text_test ...[2024-11-29 11:55:24.542204] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:11:19.232 [2024-11-29 11:55:24.542269] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:11:19.232 [2024-11-29 11:55:24.542372] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:11:19.232 passed 00:11:19.232 Test: pdu_hdr_op_logout_test ...[2024-11-29 11:55:24.542459] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:11:19.232 [2024-11-29 11:55:24.542500] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:11:19.232 [2024-11-29 11:55:24.542568] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
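For orientation, the *_ut binaries in this run are ordinary CUnit registrations: each one registers a suite, adds test functions, runs them, and prints the Run Summary blocks seen above. A minimal, self-contained sketch of that shape — the suite name, stub parser, and opcode check below are made up for illustration and are not SPDK code:

#include <CUnit/Basic.h>

/* Hypothetical stand-in for the code under test; opcode 5 is rejected to
 * mirror the "processing PDU header (opcode=5) failed" style of negative
 * case exercised in the iscsi_ut suite above. */
static int parse_pdu_stub(int opcode)
{
        return (opcode == 5) ? -1 : 0;
}

static void test_reject_bad_opcode(void)
{
        CU_ASSERT(parse_pdu_stub(5) != 0);
        CU_ASSERT(parse_pdu_stub(1) == 0);
}

int main(void)
{
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
                return CU_get_error();
        }
        suite = CU_add_suite("iscsi_demo_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "reject_bad_opcode", test_reject_bad_opcode) == NULL) {
                CU_cleanup_registry();
                return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return (int)failures;
}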
00:11:19.232 passed 00:11:19.232 Test: pdu_hdr_op_scsi_test ...[2024-11-29 11:55:24.542700] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:11:19.232 [2024-11-29 11:55:24.542734] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:11:19.232 [2024-11-29 11:55:24.542776] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:11:19.232 [2024-11-29 11:55:24.542859] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:11:19.232 [2024-11-29 11:55:24.542942] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:11:19.232 [2024-11-29 11:55:24.543092] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:11:19.232 passed 00:11:19.232 Test: pdu_hdr_op_task_mgmt_test ...[2024-11-29 11:55:24.543179] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:11:19.232 [2024-11-29 11:55:24.543249] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:11:19.232 passed 00:11:19.232 Test: pdu_hdr_op_nopout_test ...[2024-11-29 11:55:24.543429] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:11:19.232 [2024-11-29 11:55:24.543501] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:11:19.232 [2024-11-29 11:55:24.543532] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:11:19.232 passed 00:11:19.232 Test: pdu_hdr_op_data_test ...[2024-11-29 11:55:24.543560] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:11:19.232 [2024-11-29 11:55:24.543612] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:11:19.232 [2024-11-29 11:55:24.543666] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:11:19.232 [2024-11-29 11:55:24.543734] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:11:19.232 [2024-11-29 11:55:24.543790] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:11:19.232 [2024-11-29 11:55:24.543849] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:11:19.232 passed 00:11:19.232 Test: empty_text_with_cbit_test ...[2024-11-29 11:55:24.543916] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:11:19.232 [2024-11-29 11:55:24.543951] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:11:19.232 passed 00:11:19.232 Test: pdu_payload_read_test ...[2024-11-29 
11:55:24.545688] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:11:19.232 passed 00:11:19.232 Test: data_out_pdu_sequence_test ...passed 00:11:19.232 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:11:19.232 00:11:19.232 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.232 suites 1 1 n/a 0 0 00:11:19.232 tests 24 24 24 0 0 00:11:19.232 asserts 150253 150253 150253 0 n/a 00:11:19.232 00:11:19.232 Elapsed time = 0.014 seconds 00:11:19.232 11:55:24 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:11:19.232 00:11:19.232 00:11:19.232 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.232 http://cunit.sourceforge.net/ 00:11:19.232 00:11:19.232 00:11:19.232 Suite: init_grp_suite 00:11:19.232 Test: create_initiator_group_success_case ...passed 00:11:19.232 Test: find_initiator_group_success_case ...passed 00:11:19.232 Test: register_initiator_group_twice_case ...passed 00:11:19.232 Test: add_initiator_name_success_case ...passed 00:11:19.232 Test: add_initiator_name_fail_case ...[2024-11-29 11:55:24.581818] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:11:19.232 passed 00:11:19.232 Test: delete_all_initiator_names_success_case ...passed 00:11:19.232 Test: add_netmask_success_case ...passed 00:11:19.232 Test: add_netmask_fail_case ...[2024-11-29 11:55:24.582499] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:11:19.232 passed 00:11:19.232 Test: delete_all_netmasks_success_case ...passed 00:11:19.232 Test: initiator_name_overwrite_all_to_any_case ...passed 00:11:19.232 Test: netmask_overwrite_all_to_any_case ...passed 00:11:19.232 Test: add_delete_initiator_names_case ...passed 00:11:19.232 Test: add_duplicated_initiator_names_case ...passed 00:11:19.232 Test: delete_nonexisting_initiator_names_case ...passed 00:11:19.232 Test: add_delete_netmasks_case ...passed 00:11:19.232 Test: add_duplicated_netmasks_case ...passed 00:11:19.232 Test: delete_nonexisting_netmasks_case ...passed 00:11:19.232 00:11:19.232 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.232 suites 1 1 n/a 0 0 00:11:19.232 tests 17 17 17 0 0 00:11:19.232 asserts 108 108 108 0 n/a 00:11:19.232 00:11:19.232 Elapsed time = 0.002 seconds 00:11:19.232 11:55:24 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:11:19.232 00:11:19.232 00:11:19.232 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.232 http://cunit.sourceforge.net/ 00:11:19.232 00:11:19.232 00:11:19.232 Suite: portal_grp_suite 00:11:19.232 Test: portal_create_ipv4_normal_case ...passed 00:11:19.232 Test: portal_create_ipv6_normal_case ...passed 00:11:19.232 Test: portal_create_ipv4_wildcard_case ...passed 00:11:19.232 Test: portal_create_ipv6_wildcard_case ...passed 00:11:19.232 Test: portal_create_twice_case ...[2024-11-29 11:55:24.615494] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:11:19.232 passed 00:11:19.232 Test: portal_grp_register_unregister_case ...passed 00:11:19.232 Test: portal_grp_register_twice_case ...passed 00:11:19.232 Test: portal_grp_add_delete_case ...passed 00:11:19.232 Test: portal_grp_add_delete_twice_case ...passed 00:11:19.232 00:11:19.232 Run Summary: 
Type Total Ran Passed Failed Inactive 00:11:19.232 suites 1 1 n/a 0 0 00:11:19.232 tests 9 9 9 0 0 00:11:19.232 asserts 44 44 44 0 n/a 00:11:19.232 00:11:19.232 Elapsed time = 0.003 seconds 00:11:19.232 00:11:19.232 real 0m0.202s 00:11:19.232 user 0m0.123s 00:11:19.232 sys 0m0.081s 00:11:19.232 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:19.232 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.232 ************************************ 00:11:19.232 END TEST unittest_iscsi 00:11:19.232 ************************************ 00:11:19.232 11:55:24 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:11:19.232 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.232 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.232 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.232 ************************************ 00:11:19.232 START TEST unittest_json 00:11:19.232 ************************************ 00:11:19.232 11:55:24 -- common/autotest_common.sh@1114 -- # unittest_json 00:11:19.232 11:55:24 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:11:19.232 00:11:19.232 00:11:19.232 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.232 http://cunit.sourceforge.net/ 00:11:19.232 00:11:19.232 00:11:19.232 Suite: json 00:11:19.232 Test: test_parse_literal ...passed 00:11:19.232 Test: test_parse_string_simple ...passed 00:11:19.232 Test: test_parse_string_control_chars ...passed 00:11:19.232 Test: test_parse_string_utf8 ...passed 00:11:19.232 Test: test_parse_string_escapes_twochar ...passed 00:11:19.232 Test: test_parse_string_escapes_unicode ...passed 00:11:19.232 Test: test_parse_number ...passed 00:11:19.232 Test: test_parse_array ...passed 00:11:19.232 Test: test_parse_object ...passed 00:11:19.232 Test: test_parse_nesting ...passed 00:11:19.232 Test: test_parse_comment ...passed 00:11:19.232 00:11:19.232 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.232 suites 1 1 n/a 0 0 00:11:19.232 tests 11 11 11 0 0 00:11:19.232 asserts 1516 1516 1516 0 n/a 00:11:19.232 00:11:19.232 Elapsed time = 0.002 seconds 00:11:19.232 11:55:24 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:11:19.232 00:11:19.232 00:11:19.232 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.232 http://cunit.sourceforge.net/ 00:11:19.232 00:11:19.233 00:11:19.233 Suite: json 00:11:19.233 Test: test_strequal ...passed 00:11:19.233 Test: test_num_to_uint16 ...passed 00:11:19.233 Test: test_num_to_int32 ...passed 00:11:19.233 Test: test_num_to_uint64 ...passed 00:11:19.233 Test: test_decode_object ...passed 00:11:19.233 Test: test_decode_array ...passed 00:11:19.233 Test: test_decode_bool ...passed 00:11:19.233 Test: test_decode_uint16 ...passed 00:11:19.233 Test: test_decode_int32 ...passed 00:11:19.233 Test: test_decode_uint32 ...passed 00:11:19.233 Test: test_decode_uint64 ...passed 00:11:19.233 Test: test_decode_string ...passed 00:11:19.233 Test: test_decode_uuid ...passed 00:11:19.233 Test: test_find ...passed 00:11:19.233 Test: test_find_array ...passed 00:11:19.233 Test: test_iterating ...passed 00:11:19.233 Test: test_free_object ...passed 00:11:19.233 00:11:19.233 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.233 suites 1 1 n/a 0 0 00:11:19.233 tests 17 17 17 0 0 00:11:19.233 asserts 236 236 236 0 n/a 00:11:19.233 00:11:19.233 Elapsed time = 0.001 seconds 00:11:19.492 
11:55:24 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:11:19.492 00:11:19.492 00:11:19.492 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.492 http://cunit.sourceforge.net/ 00:11:19.492 00:11:19.492 00:11:19.492 Suite: json 00:11:19.492 Test: test_write_literal ...passed 00:11:19.492 Test: test_write_string_simple ...passed 00:11:19.492 Test: test_write_string_escapes ...passed 00:11:19.492 Test: test_write_string_utf16le ...passed 00:11:19.492 Test: test_write_number_int32 ...passed 00:11:19.492 Test: test_write_number_uint32 ...passed 00:11:19.492 Test: test_write_number_uint128 ...passed 00:11:19.492 Test: test_write_string_number_uint128 ...passed 00:11:19.492 Test: test_write_number_int64 ...passed 00:11:19.492 Test: test_write_number_uint64 ...passed 00:11:19.492 Test: test_write_number_double ...passed 00:11:19.492 Test: test_write_uuid ...passed 00:11:19.492 Test: test_write_array ...passed 00:11:19.492 Test: test_write_object ...passed 00:11:19.492 Test: test_write_nesting ...passed 00:11:19.492 Test: test_write_val ...passed 00:11:19.492 00:11:19.492 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.492 suites 1 1 n/a 0 0 00:11:19.492 tests 16 16 16 0 0 00:11:19.492 asserts 918 918 918 0 n/a 00:11:19.492 00:11:19.492 Elapsed time = 0.004 seconds 00:11:19.492 11:55:24 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:11:19.492 00:11:19.492 00:11:19.492 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.492 http://cunit.sourceforge.net/ 00:11:19.492 00:11:19.492 00:11:19.492 Suite: jsonrpc 00:11:19.492 Test: test_parse_request ...passed 00:11:19.492 Test: test_parse_request_streaming ...passed 00:11:19.492 00:11:19.492 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.492 suites 1 1 n/a 0 0 00:11:19.492 tests 2 2 2 0 0 00:11:19.492 asserts 289 289 289 0 n/a 00:11:19.492 00:11:19.492 Elapsed time = 0.004 seconds 00:11:19.492 00:11:19.492 real 0m0.132s 00:11:19.492 user 0m0.081s 00:11:19.492 sys 0m0.048s 00:11:19.492 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:19.492 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.492 ************************************ 00:11:19.492 END TEST unittest_json 00:11:19.492 ************************************ 00:11:19.492 11:55:24 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:11:19.492 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.492 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.492 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.492 ************************************ 00:11:19.492 START TEST unittest_rpc 00:11:19.492 ************************************ 00:11:19.492 11:55:24 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:11:19.492 11:55:24 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:11:19.492 00:11:19.492 00:11:19.492 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.492 http://cunit.sourceforge.net/ 00:11:19.492 00:11:19.492 00:11:19.492 Suite: rpc 00:11:19.492 Test: test_jsonrpc_handler ...passed 00:11:19.492 Test: test_spdk_rpc_is_method_allowed ...passed 00:11:19.492 Test: test_rpc_get_methods ...[2024-11-29 11:55:24.874122] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:11:19.492 passed 00:11:19.492 Test: 
test_rpc_spdk_get_version ...passed 00:11:19.492 Test: test_spdk_rpc_listen_close ...passed 00:11:19.492 00:11:19.492 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.492 suites 1 1 n/a 0 0 00:11:19.492 tests 5 5 5 0 0 00:11:19.492 asserts 20 20 20 0 n/a 00:11:19.492 00:11:19.492 Elapsed time = 0.001 seconds 00:11:19.492 00:11:19.492 real 0m0.029s 00:11:19.492 user 0m0.005s 00:11:19.492 sys 0m0.023s 00:11:19.492 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:19.492 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.492 ************************************ 00:11:19.492 END TEST unittest_rpc 00:11:19.492 ************************************ 00:11:19.492 11:55:24 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:11:19.492 11:55:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.492 11:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.492 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.492 ************************************ 00:11:19.492 START TEST unittest_notify 00:11:19.492 ************************************ 00:11:19.492 11:55:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:11:19.492 00:11:19.492 00:11:19.492 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.492 http://cunit.sourceforge.net/ 00:11:19.492 00:11:19.492 00:11:19.492 Suite: app_suite 00:11:19.492 Test: notify ...passed 00:11:19.492 00:11:19.492 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.492 suites 1 1 n/a 0 0 00:11:19.492 tests 1 1 1 0 0 00:11:19.492 asserts 13 13 13 0 n/a 00:11:19.492 00:11:19.492 Elapsed time = 0.000 seconds 00:11:19.492 00:11:19.492 real 0m0.031s 00:11:19.492 user 0m0.022s 00:11:19.492 sys 0m0.009s 00:11:19.492 11:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:19.492 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.492 ************************************ 00:11:19.492 END TEST unittest_notify 00:11:19.492 ************************************ 00:11:19.750 11:55:25 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:11:19.750 11:55:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.750 11:55:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.750 11:55:25 -- common/autotest_common.sh@10 -- # set +x 00:11:19.750 ************************************ 00:11:19.750 START TEST unittest_nvme 00:11:19.750 ************************************ 00:11:19.750 11:55:25 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:11:19.750 11:55:25 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:11:19.750 00:11:19.750 00:11:19.750 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.750 http://cunit.sourceforge.net/ 00:11:19.750 00:11:19.750 00:11:19.750 Suite: nvme 00:11:19.750 Test: test_opc_data_transfer ...passed 00:11:19.750 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:11:19.750 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:11:19.750 Test: test_trid_parse_and_compare ...[2024-11-29 11:55:25.028773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:11:19.750 [2024-11-29 11:55:25.029190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:11:19.750 [2024-11-29 
11:55:25.029339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:11:19.750 [2024-11-29 11:55:25.029399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:11:19.750 [2024-11-29 11:55:25.029448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:11:19.750 [2024-11-29 11:55:25.029577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:11:19.750 passed 00:11:19.750 Test: test_trid_trtype_str ...passed 00:11:19.750 Test: test_trid_adrfam_str ...passed 00:11:19.750 Test: test_nvme_ctrlr_probe ...[2024-11-29 11:55:25.029848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:11:19.750 passed 00:11:19.750 Test: test_spdk_nvme_probe ...[2024-11-29 11:55:25.029952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:11:19.750 [2024-11-29 11:55:25.029987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:11:19.750 [2024-11-29 11:55:25.030072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:11:19.750 [2024-11-29 11:55:25.030105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:11:19.750 passed 00:11:19.750 Test: test_spdk_nvme_connect ...[2024-11-29 11:55:25.030178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:11:19.750 [2024-11-29 11:55:25.030526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:11:19.750 [2024-11-29 11:55:25.030596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:11:19.750 passed 00:11:19.750 Test: test_nvme_ctrlr_probe_internal ...[2024-11-29 11:55:25.030706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:11:19.751 [2024-11-29 11:55:25.030747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:11:19.751 passed 00:11:19.751 Test: test_nvme_init_controllers ...passed 00:11:19.751 Test: test_nvme_driver_init ...[2024-11-29 11:55:25.030815] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:11:19.751 [2024-11-29 11:55:25.030921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:11:19.751 [2024-11-29 11:55:25.030966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:11:19.751 [2024-11-29 11:55:25.144958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:11:19.751 passed 00:11:19.751 Test: test_spdk_nvme_detach ...passed 00:11:19.751 Test: test_nvme_completion_poll_cb ...passed 00:11:19.751 Test: test_nvme_user_copy_cmd_complete ...passed[2024-11-29 11:55:25.145191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:11:19.751 00:11:19.751 Test: 
test_nvme_allocate_request_null ...passed 00:11:19.751 Test: test_nvme_allocate_request ...passed 00:11:19.751 Test: test_nvme_free_request ...passed 00:11:19.751 Test: test_nvme_allocate_request_user_copy ...passed 00:11:19.751 Test: test_nvme_robust_mutex_init_shared ...passed 00:11:19.751 Test: test_nvme_request_check_timeout ...passed 00:11:19.751 Test: test_nvme_wait_for_completion ...passed 00:11:19.751 Test: test_spdk_nvme_parse_func ...passed 00:11:19.751 Test: test_spdk_nvme_detach_async ...passed 00:11:19.751 Test: test_nvme_parse_addr ...[2024-11-29 11:55:25.146102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:11:19.751 passed 00:11:19.751 00:11:19.751 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.751 suites 1 1 n/a 0 0 00:11:19.751 tests 25 25 25 0 0 00:11:19.751 asserts 326 326 326 0 n/a 00:11:19.751 00:11:19.751 Elapsed time = 0.007 seconds 00:11:19.751 11:55:25 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:11:19.751 00:11:19.751 00:11:19.751 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.751 http://cunit.sourceforge.net/ 00:11:19.751 00:11:19.751 00:11:19.751 Suite: nvme_ctrlr 00:11:19.751 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-29 11:55:25.184062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 passed 00:11:19.751 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-29 11:55:25.186042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 passed 00:11:19.751 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-29 11:55:25.187373] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 passed 00:11:19.751 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-29 11:55:25.188645] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 passed 00:11:19.751 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-29 11:55:25.189908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 [2024-11-29 11:55:25.191128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-29 11:55:25.192441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-29 11:55:25.193638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:11:19.751 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-29 11:55:25.196175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 [2024-11-29 11:55:25.198557] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-29 11:55:25.199833] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:11:19.751 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-29 11:55:25.202407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 [2024-11-29 11:55:25.203674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-29 11:55:25.206147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:11:19.751 Test: test_nvme_ctrlr_init_delay ...[2024-11-29 11:55:25.208816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 passed 00:11:19.751 Test: test_alloc_io_qpair_rr_1 ...[2024-11-29 11:55:25.210194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 [2024-11-29 11:55:25.210398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:11:19.751 [2024-11-29 11:55:25.210688] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:11:19.751 [2024-11-29 11:55:25.210825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:11:19.751 [2024-11-29 11:55:25.210935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:11:19.751 passed 00:11:19.751 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:11:19.751 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:11:19.751 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-29 11:55:25.211217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 passed 00:11:19.751 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-29 11:55:25.211561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:19.751 [2024-11-29 11:55:25.211797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:11:19.751 passed 00:11:19.751 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-29 11:55:25.212238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:11:19.751 [2024-11-29 11:55:25.212514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:11:19.751 [2024-11-29 11:55:25.212694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
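The update-firmware error paths exercised above ("spdk_nvme_ctrlr_update_firmware invalid size!", image download failed, fw_commit failed) amount to validating a caller-supplied payload before any admin command is issued. A hedged sketch of the size pre-check only — the 4-byte granularity and the helper name are assumptions for illustration, not what nvme_ctrlr.c actually enforces:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative granularity; the real constraint lives in the driver. */
#define FW_IMAGE_GRANULARITY 4u

static bool fw_image_size_is_valid(size_t size)
{
        /* non-zero and a whole number of granules */
        return size > 0 && (size % FW_IMAGE_GRANULARITY) == 0;
}

int main(void)
{
        printf("size 0      -> %d\n", fw_image_size_is_valid(0));      /* 0: rejected */
        printf("size 4097   -> %d\n", fw_image_size_is_valid(4097));   /* 0: rejected */
        printf("size 131072 -> %d\n", fw_image_size_is_valid(131072)); /* 1: accepted */
        return 0;
}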
00:11:19.751 [2024-11-29 11:55:25.212812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:11:19.751 passed 00:11:19.751 Test: test_nvme_ctrlr_fail ...[2024-11-29 11:55:25.212944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:11:19.751 passed 00:11:19.751 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:11:19.751 Test: test_nvme_ctrlr_set_supported_features ...passed 00:11:19.751 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:11:19.751 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-29 11:55:25.213441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:11:20.319 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:11:20.319 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:11:20.319 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-29 11:55:25.538993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-29 11:55:25.546482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-29 11:55:25.547790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 [2024-11-29 11:55:25.547877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:11:20.319 passed 00:11:20.319 Test: test_alloc_io_qpair_fail ...[2024-11-29 11:55:25.549049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_add_remove_process ...passed 00:11:20.319 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:11:20.319 Test: test_nvme_ctrlr_set_state ...[2024-11-29 11:55:25.549239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-29 11:55:25.549439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
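The repeated "admin_queue_size 0 is less than minimum defined by NVMe spec, use min value" and "Specified timeout would cause integer overflow. Defaulting to no timeout." messages come from defensive clamping of caller-supplied options during controller construction. A rough sketch of that pattern, using placeholder field names and a placeholder minimum rather than the real spec values:

#include <stdint.h>
#include <stdio.h>

/* Placeholder minimum; the spec-defined value lives in the driver. */
#define MIN_ADMIN_QUEUE_SIZE 2u

struct ctrlr_opts {
        uint32_t admin_queue_size;
        uint64_t timeout_ms;
};

static void sanitize_opts(struct ctrlr_opts *opts, uint64_t ticks_per_ms)
{
        if (opts->admin_queue_size < MIN_ADMIN_QUEUE_SIZE) {
                fprintf(stderr, "admin_queue_size %u below minimum, using %u\n",
                        opts->admin_queue_size, MIN_ADMIN_QUEUE_SIZE);
                opts->admin_queue_size = MIN_ADMIN_QUEUE_SIZE;
        }
        /* Guard the ms -> ticks conversion against 64-bit overflow and fall
         * back to "no timeout" (0), as the log message above describes. */
        if (ticks_per_ms != 0 && opts->timeout_ms > UINT64_MAX / ticks_per_ms) {
                fprintf(stderr, "timeout would overflow, defaulting to no timeout\n");
                opts->timeout_ms = 0;
        }
}

int main(void)
{
        struct ctrlr_opts opts = { .admin_queue_size = 0, .timeout_ms = UINT64_MAX };

        sanitize_opts(&opts, 1000000);
        printf("admin_queue_size=%u timeout_ms=%llu\n",
               opts.admin_queue_size, (unsigned long long)opts.timeout_ms);
        return 0;
}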
00:11:20.319 [2024-11-29 11:55:25.549539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-29 11:55:25.573627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-29 11:55:25.619822] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_reset ...[2024-11-29 11:55:25.621490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_aer_callback ...[2024-11-29 11:55:25.621936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.319 passed 00:11:20.319 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-29 11:55:25.623497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:11:20.320 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:11:20.320 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-29 11:55:25.625458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:11:20.320 Test: test_nvme_ctrlr_ana_resize ...[2024-11-29 11:55:25.626953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:11:20.320 Test: test_nvme_transport_ctrlr_ready ...[2024-11-29 11:55:25.628692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ctrlr_disable ...[2024-11-29 11:55:25.628775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:11:20.320 [2024-11-29 11:55:25.628847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:11:20.320 passed 00:11:20.320 00:11:20.320 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.320 suites 1 1 n/a 0 0 00:11:20.320 tests 43 43 43 0 0 00:11:20.320 asserts 10418 10418 10418 0 n/a 00:11:20.320 00:11:20.320 Elapsed time = 0.405 seconds 00:11:20.320 11:55:25 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:11:20.320 00:11:20.320 00:11:20.320 CUnit - A unit testing framework for C - Version 2.1-3 
00:11:20.320 http://cunit.sourceforge.net/ 00:11:20.320 00:11:20.320 00:11:20.320 Suite: nvme_ctrlr_cmd 00:11:20.320 Test: test_get_log_pages ...passed 00:11:20.320 Test: test_set_feature_cmd ...passed 00:11:20.320 Test: test_set_feature_ns_cmd ...passed 00:11:20.320 Test: test_get_feature_cmd ...passed 00:11:20.320 Test: test_get_feature_ns_cmd ...passed 00:11:20.320 Test: test_abort_cmd ...passed 00:11:20.320 Test: test_set_host_id_cmds ...[2024-11-29 11:55:25.685095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:11:20.320 passed 00:11:20.320 Test: test_io_cmd_raw_no_payload_build ...passed 00:11:20.320 Test: test_io_raw_cmd ...passed 00:11:20.320 Test: test_io_raw_cmd_with_md ...passed 00:11:20.320 Test: test_namespace_attach ...passed 00:11:20.320 Test: test_namespace_detach ...passed 00:11:20.320 Test: test_namespace_create ...passed 00:11:20.320 Test: test_namespace_delete ...passed 00:11:20.320 Test: test_doorbell_buffer_config ...passed 00:11:20.320 Test: test_format_nvme ...passed 00:11:20.320 Test: test_fw_commit ...passed 00:11:20.320 Test: test_fw_image_download ...passed 00:11:20.320 Test: test_sanitize ...passed 00:11:20.320 Test: test_directive ...passed 00:11:20.320 Test: test_nvme_request_add_abort ...passed 00:11:20.320 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:11:20.320 Test: test_nvme_ctrlr_cmd_identify ...passed 00:11:20.320 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:11:20.320 00:11:20.320 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.320 suites 1 1 n/a 0 0 00:11:20.320 tests 24 24 24 0 0 00:11:20.320 asserts 198 198 198 0 n/a 00:11:20.320 00:11:20.320 Elapsed time = 0.001 seconds 00:11:20.320 11:55:25 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:11:20.320 00:11:20.320 00:11:20.320 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.320 http://cunit.sourceforge.net/ 00:11:20.320 00:11:20.320 00:11:20.320 Suite: nvme_ctrlr_cmd 00:11:20.320 Test: test_geometry_cmd ...passed 00:11:20.320 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:11:20.320 00:11:20.320 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.320 suites 1 1 n/a 0 0 00:11:20.320 tests 2 2 2 0 0 00:11:20.320 asserts 7 7 7 0 n/a 00:11:20.320 00:11:20.320 Elapsed time = 0.000 seconds 00:11:20.320 11:55:25 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:11:20.320 00:11:20.320 00:11:20.320 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.320 http://cunit.sourceforge.net/ 00:11:20.320 00:11:20.320 00:11:20.320 Suite: nvme 00:11:20.320 Test: test_nvme_ns_construct ...passed 00:11:20.320 Test: test_nvme_ns_uuid ...passed 00:11:20.320 Test: test_nvme_ns_csi ...passed 00:11:20.320 Test: test_nvme_ns_data ...passed 00:11:20.320 Test: test_nvme_ns_set_identify_data ...passed 00:11:20.320 Test: test_spdk_nvme_ns_get_values ...passed 00:11:20.320 Test: test_spdk_nvme_ns_is_active ...passed 00:11:20.320 Test: spdk_nvme_ns_supports ...passed 00:11:20.320 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:11:20.320 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:11:20.320 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:11:20.320 Test: test_nvme_ns_find_id_desc ...passed 00:11:20.320 00:11:20.320 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.320 suites 1 1 n/a 0 0 00:11:20.320 tests 
12 12 12 0 0 00:11:20.320 asserts 83 83 83 0 n/a 00:11:20.320 00:11:20.320 Elapsed time = 0.000 seconds 00:11:20.320 11:55:25 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:11:20.320 00:11:20.320 00:11:20.320 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.320 http://cunit.sourceforge.net/ 00:11:20.320 00:11:20.320 00:11:20.320 Suite: nvme_ns_cmd 00:11:20.320 Test: split_test ...passed 00:11:20.320 Test: split_test2 ...passed 00:11:20.320 Test: split_test3 ...passed 00:11:20.320 Test: split_test4 ...passed 00:11:20.320 Test: test_nvme_ns_cmd_flush ...passed 00:11:20.320 Test: test_nvme_ns_cmd_dataset_management ...passed 00:11:20.320 Test: test_nvme_ns_cmd_copy ...passed 00:11:20.320 Test: test_io_flags ...[2024-11-29 11:55:25.766458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:11:20.320 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:11:20.320 Test: test_nvme_ns_cmd_reservation_register ...passed 00:11:20.320 Test: test_nvme_ns_cmd_reservation_release ...passed 00:11:20.320 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:11:20.320 Test: test_nvme_ns_cmd_reservation_report ...passed 00:11:20.320 Test: test_cmd_child_request ...passed 00:11:20.320 Test: test_nvme_ns_cmd_readv ...passed 00:11:20.320 Test: test_nvme_ns_cmd_read_with_md ...passed 00:11:20.320 Test: test_nvme_ns_cmd_writev ...[2024-11-29 11:55:25.767387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ns_cmd_write_with_md ...passed 00:11:20.320 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:11:20.320 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:11:20.320 Test: test_nvme_ns_cmd_comparev ...passed 00:11:20.320 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:11:20.320 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:11:20.320 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:11:20.320 Test: test_nvme_ns_cmd_setup_request ...passed 00:11:20.320 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:11:20.320 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:11:20.320 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-11-29 11:55:25.768782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:11:20.320 passed 00:11:20.320 Test: test_nvme_ns_cmd_verify ...passed 00:11:20.320 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:11:20.320 Test: test_nvme_ns_cmd_io_mgmt_recv ...[2024-11-29 11:55:25.768868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:11:20.320 passed 00:11:20.320 00:11:20.320 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.320 suites 1 1 n/a 0 0 00:11:20.320 tests 32 32 32 0 0 00:11:20.320 asserts 550 550 550 0 n/a 00:11:20.320 00:11:20.320 Elapsed time = 0.003 seconds 00:11:20.320 11:55:25 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:11:20.320 00:11:20.320 00:11:20.320 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.320 http://cunit.sourceforge.net/ 00:11:20.320 00:11:20.320 00:11:20.320 Suite: nvme_ns_cmd 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
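The test_io_flags and writev entries in the nvme_ns_cmd run above show lib/nvme rejecting requests locally: reserved io_flags bits such as 0xfffc are refused, and a split child request must stay a whole multiple of the 512-byte block size. As a rough caller-side illustration only, with hypothetical ns/qpair/buffer names that are not part of the test and assuming the usual public prototype from spdk/nvme.h, a read submitted with undefined flag bits would be expected to fail up front (typically with -EINVAL) rather than ever reach the controller:

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative sketch: `ns`, `qpair` and `buf` are assumed to come from an
     * already-probed controller; they are placeholders, not the test's stubs. */
    static int read_with_flags(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                               void *buf, uint64_t lba, uint32_t lba_count,
                               spdk_nvme_cmd_cb cb_fn, void *cb_arg, uint32_t io_flags)
    {
        /* Only defined SPDK_NVME_IO_FLAGS_* bits are meaningful; undefined bits
         * (like the 0xfffc pattern in the log) are rejected before submission. */
        int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, lba_count,
                                       cb_fn, cb_arg, io_flags);
        if (rc != 0) {
            fprintf(stderr, "read not submitted, rc=%d (%s)\n", rc,
                    rc == -EINVAL ? "invalid args/flags" : "other");
        }
        return rc;
    }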
00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:11:20.320 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:11:20.321 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:11:20.321 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:11:20.321 00:11:20.321 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.321 suites 1 1 n/a 0 0 00:11:20.321 tests 12 12 12 0 0 00:11:20.321 asserts 123 123 123 0 n/a 00:11:20.321 00:11:20.321 Elapsed time = 0.001 seconds 00:11:20.321 11:55:25 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:11:20.321 00:11:20.321 00:11:20.321 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.321 http://cunit.sourceforge.net/ 00:11:20.321 00:11:20.321 00:11:20.321 Suite: nvme_qpair 00:11:20.321 Test: test3 ...passed 00:11:20.321 Test: test_ctrlr_failed ...passed 00:11:20.321 Test: struct_packing ...passed 00:11:20.321 Test: test_nvme_qpair_process_completions ...[2024-11-29 11:55:25.826064] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:11:20.321 [2024-11-29 11:55:25.826393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:11:20.321 [2024-11-29 11:55:25.826450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:20.321 passed 00:11:20.321 Test: test_nvme_completion_is_retry ...passed 00:11:20.321 Test: test_get_status_string ...passed 00:11:20.321 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-11-29 11:55:25.826535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:11:20.321 passed 00:11:20.321 Test: test_nvme_qpair_submit_request ...passed 00:11:20.321 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:11:20.321 Test: test_nvme_qpair_manual_complete_request ...passed 00:11:20.321 Test: test_nvme_qpair_init_deinit ...passed 00:11:20.321 Test: test_nvme_get_sgl_print_info ...[2024-11-29 11:55:25.826902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:11:20.321 passed 00:11:20.321 00:11:20.321 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.321 suites 1 1 n/a 0 0 00:11:20.321 tests 12 12 12 0 0 00:11:20.321 asserts 154 154 154 0 n/a 00:11:20.321 00:11:20.321 Elapsed time = 0.001 seconds 00:11:20.579 11:55:25 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:11:20.579 00:11:20.579 00:11:20.579 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.579 http://cunit.sourceforge.net/ 00:11:20.579 00:11:20.579 00:11:20.579 Suite: nvme_pcie 00:11:20.579 Test: test_prp_list_append 
...[2024-11-29 11:55:25.855156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:11:20.579 [2024-11-29 11:55:25.855480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:11:20.579 [2024-11-29 11:55:25.855529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:11:20.579 [2024-11-29 11:55:25.855743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:11:20.579 passed 00:11:20.579 Test: test_nvme_pcie_hotplug_monitor ...passed 00:11:20.579 Test: test_shadow_doorbell_update ...passed 00:11:20.579 Test: test_build_contig_hw_sgl_request ...[2024-11-29 11:55:25.855830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:11:20.579 passed 00:11:20.579 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:11:20.579 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:11:20.579 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:11:20.579 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:11:20.579 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:11:20.579 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:11:20.579 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-29 11:55:25.855973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:11:20.579 passed 00:11:20.579 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:11:20.579 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-11-29 11:55:25.856046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
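The prp_list_append failures just above encode the NVMe PRP rules: the first PRP may start at any dword-aligned offset within a page, every later PRP entry must be page aligned, and a transfer needs one entry per memory page it touches, so a fixed-size PRP list can run out of entries. A back-of-the-envelope helper for that last point, written as a stand-alone sketch with a 4 KiB page size assumed rather than anything taken from SPDK:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* assumed memory page size for the example */

    /* One PRP entry is needed per memory page the transfer touches. */
    static uint32_t prp_entries_needed(uint64_t vaddr, uint32_t len)
    {
        uint64_t first_page = vaddr & ~((uint64_t)PAGE_SIZE - 1);
        uint64_t last_page  = (vaddr + len - 1) & ~((uint64_t)PAGE_SIZE - 1);
        return (uint32_t)((last_page - first_page) / PAGE_SIZE) + 1;
    }

    int main(void)
    {
        /* 8 KiB starting 0x100 bytes into a page touches 3 pages:
         * PRP1 covers the first, a 2-entry PRP list covers the rest. */
        printf("entries = %u\n", prp_entries_needed(0x100100, 8192));
        return 0;
    }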
00:11:20.579 [2024-11-29 11:55:25.856115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:11:20.579 passed 00:11:20.579 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:11:20.579 00:11:20.579 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.579 suites 1 1 n/a 0 0 00:11:20.579 tests 14 14 14 0 0 00:11:20.579 asserts 235 235 235 0 n/a 00:11:20.579 00:11:20.579 Elapsed time = 0.001 seconds 00:11:20.579 [2024-11-29 11:55:25.856156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:11:20.579 [2024-11-29 11:55:25.856191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:11:20.579 11:55:25 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:11:20.579 00:11:20.579 00:11:20.579 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.579 http://cunit.sourceforge.net/ 00:11:20.579 00:11:20.579 00:11:20.579 Suite: nvme_ns_cmd 00:11:20.579 Test: nvme_poll_group_create_test ...passed 00:11:20.579 Test: nvme_poll_group_add_remove_test ...passed 00:11:20.579 Test: nvme_poll_group_process_completions ...passed 00:11:20.579 Test: nvme_poll_group_destroy_test ...passed 00:11:20.579 Test: nvme_poll_group_get_free_stats ...passed 00:11:20.579 00:11:20.579 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.579 suites 1 1 n/a 0 0 00:11:20.579 tests 5 5 5 0 0 00:11:20.579 asserts 75 75 75 0 n/a 00:11:20.579 00:11:20.579 Elapsed time = 0.000 seconds 00:11:20.579 11:55:25 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:11:20.579 00:11:20.579 00:11:20.579 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.579 http://cunit.sourceforge.net/ 00:11:20.579 00:11:20.579 00:11:20.579 Suite: nvme_quirks 00:11:20.579 Test: test_nvme_quirks_striping ...passed 00:11:20.579 00:11:20.579 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.579 suites 1 1 n/a 0 0 00:11:20.579 tests 1 1 1 0 0 00:11:20.579 asserts 5 5 5 0 n/a 00:11:20.579 00:11:20.579 Elapsed time = 0.000 seconds 00:11:20.579 11:55:25 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:11:20.579 00:11:20.579 00:11:20.579 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.579 http://cunit.sourceforge.net/ 00:11:20.579 00:11:20.579 00:11:20.579 Suite: nvme_tcp 00:11:20.579 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:11:20.579 Test: test_nvme_tcp_build_iovs ...passed 00:11:20.579 Test: test_nvme_tcp_build_sgl_request ...[2024-11-29 11:55:25.938497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffeae2c9d90, and the iovcnt=16, remaining_size=28672 00:11:20.579 passed 00:11:20.579 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:11:20.579 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:11:20.579 Test: test_nvme_tcp_req_complete_safe ...passed 00:11:20.579 Test: test_nvme_tcp_req_get ...passed 00:11:20.579 Test: test_nvme_tcp_req_init ...passed 00:11:20.580 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:11:20.580 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:11:20.580 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:11:20.580 Test: test_nvme_tcp_alloc_reqs ...[2024-11-29 11:55:25.939371] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cbab0 is same with the state(6) to be set 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:11:20.580 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-29 11:55:25.939788] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cac40 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.939878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffeae2cb770 00:11:20.580 [2024-11-29 11:55:25.939947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:11:20.580 [2024-11-29 11:55:25.940052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:11:20.580 [2024-11-29 11:55:25.940230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:11:20.580 [2024-11-29 11:55:25.940339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.940658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cb100 is same with the state(5) to be set 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-29 11:55:25.940846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:11:20.580 [2024-11-29 11:55:25.940920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:11:20.580 [2024-11-29 11:55:25.941194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:11:20.580 Test: 
test_nvme_tcp_c2h_payload_handle ...[2024-11-29 11:55:25.941354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffeae2cb2b0): PDU Sequence Error 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_icresp_handle ...[2024-11-29 11:55:25.941488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:11:20.580 [2024-11-29 11:55:25.941536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:11:20.580 [2024-11-29 11:55:25.941587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cac50 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.941643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:11:20.580 [2024-11-29 11:55:25.941693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cac50 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.941780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2cac50 is same with the state(0) to be set 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:11:20.580 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-29 11:55:25.941857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffeae2cb770): PDU Sequence Error 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-11-29 11:55:25.941949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffeae2c9f30 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-29 11:55:25.942104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffeae2c95b0, errno=0, rc=0 00:11:20.580 [2024-11-29 11:55:25.942156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2c95b0 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.942224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeae2c95b0 is same with the state(5) to be set 00:11:20.580 [2024-11-29 11:55:25.942289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffeae2c95b0 (0): Success 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-29 11:55:25.942334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffeae2c95b0 (0): Success 00:11:20.580 [2024-11-29 11:55:26.037878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:11:20.580 [2024-11-29 11:55:26.038007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
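The icresp_handle errors above spell out what the NVMe/TCP initiator insists on before a connection becomes usable: the ICResp PDU must carry PFV 0, advertise MAXH2CDATA of at least 4096 bytes, and keep CPDA at or below 31. A minimal stand-alone check mirroring those three rules, with field names taken from the NVMe/TCP spec rather than from SPDK's own structs:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Subset of ICResp fields relevant to the checks in the log. */
    struct icresp_fields {
        uint16_t pfv;         /* PDU format version, must be 0        */
        uint32_t maxh2cdata;  /* max host-to-controller data, >= 4096 */
        uint8_t  cpda;        /* controller PDU data alignment, <= 31 */
    };

    static bool icresp_acceptable(const struct icresp_fields *ic)
    {
        if (ic->pfv != 0) {
            fprintf(stderr, "Expected ICResp PFV 0, got %u\n", ic->pfv);
            return false;
        }
        if (ic->maxh2cdata < 4096) {
            fprintf(stderr, "Expected ICResp maxh2cdata >= 4096, got %u\n", ic->maxh2cdata);
            return false;
        }
        if (ic->cpda > 31) {
            fprintf(stderr, "Expected ICResp cpda <= 31, got %u\n", ic->cpda);
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* The values the unit test injects: PFV 1, MAXH2CDATA 2048, CPDA 64. */
        struct icresp_fields bad = { .pfv = 1, .maxh2cdata = 2048, .cpda = 64 };
        return icresp_acceptable(&bad) ? 0 : 1;
    }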
00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:11:20.580 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:11:20.580 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-29 11:55:26.038209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:11:20.580 [2024-11-29 11:55:26.038257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:11:20.580 [2024-11-29 11:55:26.038459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:11:20.580 [2024-11-29 11:55:26.038501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:11:20.580 [2024-11-29 11:55:26.038597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:11:20.580 [2024-11-29 11:55:26.038650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:11:20.580 [2024-11-29 11:55:26.038739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:11:20.580 passed 00:11:20.580 Test: test_nvme_tcp_qpair_submit_request ...[2024-11-29 11:55:26.038813] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:11:20.580 [2024-11-29 11:55:26.038930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:11:20.580 [2024-11-29 11:55:26.038966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:11:20.580 passed 00:11:20.580 00:11:20.580 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.580 suites 1 1 n/a 0 0 00:11:20.580 tests 27 27 27 0 0 00:11:20.580 asserts 624 624 624 0 n/a 00:11:20.580 00:11:20.580 Elapsed time = 0.101 seconds 00:11:20.580 11:55:26 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:11:20.580 00:11:20.580 00:11:20.580 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.580 http://cunit.sourceforge.net/ 00:11:20.580 00:11:20.580 00:11:20.580 Suite: nvme_transport 00:11:20.580 Test: test_nvme_get_transport ...passed 00:11:20.580 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:11:20.580 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:11:20.580 Test: test_nvme_transport_poll_group_add_remove ...passed 00:11:20.580 Test: test_ctrlr_get_memory_domains ...passed 00:11:20.580 00:11:20.580 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.580 suites 1 1 n/a 0 0 00:11:20.580 tests 5 5 5 0 0 00:11:20.580 asserts 28 28 28 0 n/a 00:11:20.580 00:11:20.580 Elapsed time = 0.000 seconds 00:11:20.839 11:55:26 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:11:20.839 00:11:20.839 00:11:20.839 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.839 http://cunit.sourceforge.net/ 00:11:20.839 00:11:20.839 00:11:20.839 Suite: nvme_io_msg 00:11:20.839 Test: test_nvme_io_msg_send ...passed 00:11:20.839 Test: 
test_nvme_io_msg_process ...passed 00:11:20.839 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:11:20.839 00:11:20.839 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.839 suites 1 1 n/a 0 0 00:11:20.839 tests 3 3 3 0 0 00:11:20.839 asserts 56 56 56 0 n/a 00:11:20.839 00:11:20.839 Elapsed time = 0.000 seconds 00:11:20.839 11:55:26 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:11:20.839 00:11:20.839 00:11:20.839 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.839 http://cunit.sourceforge.net/ 00:11:20.839 00:11:20.839 00:11:20.839 Suite: nvme_pcie_common 00:11:20.839 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-11-29 11:55:26.129106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:11:20.839 passed 00:11:20.839 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:11:20.839 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:11:20.839 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-29 11:55:26.129964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:11:20.839 [2024-11-29 11:55:26.130097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:11:20.839 passed 00:11:20.839 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-11-29 11:55:26.130140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:11:20.839 passed 00:11:20.839 Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-29 11:55:26.130590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:11:20.839 [2024-11-29 11:55:26.130654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:11:20.839 passed 00:11:20.839 00:11:20.839 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.839 suites 1 1 n/a 0 0 00:11:20.839 tests 6 6 6 0 0 00:11:20.839 asserts 148 148 148 0 n/a 00:11:20.839 00:11:20.839 Elapsed time = 0.002 seconds 00:11:20.839 11:55:26 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:11:20.839 00:11:20.839 00:11:20.839 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.839 http://cunit.sourceforge.net/ 00:11:20.839 00:11:20.839 00:11:20.839 Suite: nvme_fabric 00:11:20.839 Test: test_nvme_fabric_prop_set_cmd ...passed 00:11:20.839 Test: test_nvme_fabric_prop_get_cmd ...passed 00:11:20.839 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:11:20.839 Test: test_nvme_fabric_discover_probe ...passed 00:11:20.839 Test: test_nvme_fabric_qpair_connect ...[2024-11-29 11:55:26.159408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:11:20.839 passed 00:11:20.839 00:11:20.839 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.839 suites 1 1 n/a 0 0 00:11:20.839 tests 5 5 5 0 0 00:11:20.839 asserts 60 60 60 0 n/a 00:11:20.839 00:11:20.839 Elapsed time = 0.001 seconds 00:11:20.839 11:55:26 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:11:20.839 00:11:20.839 00:11:20.839 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.839 http://cunit.sourceforge.net/ 00:11:20.839 00:11:20.839 00:11:20.839 Suite: nvme_opal 00:11:20.839 Test: test_opal_nvme_security_recv_send_done ...passed 00:11:20.839 Test: test_opal_add_short_atom_header ...[2024-11-29 11:55:26.188964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:11:20.839 passed 00:11:20.839 00:11:20.839 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.839 suites 1 1 n/a 0 0 00:11:20.839 tests 2 2 2 0 0 00:11:20.839 asserts 22 22 22 0 n/a 00:11:20.839 00:11:20.839 Elapsed time = 0.001 seconds 00:11:20.839 00:11:20.839 real 0m1.187s 00:11:20.839 user 0m0.627s 00:11:20.839 sys 0m0.413s 00:11:20.839 11:55:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:20.839 11:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 ************************************ 00:11:20.839 END TEST unittest_nvme 00:11:20.839 ************************************ 00:11:20.839 11:55:26 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:11:20.839 11:55:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:20.839 11:55:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.839 11:55:26 -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 ************************************ 00:11:20.839 START TEST unittest_log 00:11:20.839 ************************************ 00:11:20.839 11:55:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:11:20.839 00:11:20.839 00:11:20.839 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.839 http://cunit.sourceforge.net/ 00:11:20.839 00:11:20.839 00:11:20.839 Suite: log 00:11:20.839 Test: log_test ...[2024-11-29 11:55:26.265741] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:11:20.839 [2024-11-29 11:55:26.266077] log_ut.c: 55:log_test: *DEBUG*: log test 00:11:20.839 passed 00:11:20.839 Test: deprecation ...log dump test: 00:11:20.839 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:11:20.839 spdk dump test: 00:11:20.839 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:11:20.839 spdk dump test: 00:11:20.839 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:11:20.839 00000010 65 20 63 68 61 72 73 e chars 00:11:21.771 passed 00:11:21.771 00:11:21.771 Run Summary: Type Total Ran Passed Failed Inactive 00:11:21.771 suites 1 1 n/a 0 0 00:11:21.771 tests 2 2 2 0 0 00:11:21.771 asserts 73 73 73 0 n/a 00:11:21.771 00:11:21.771 Elapsed time = 0.001 seconds 00:11:21.771 00:11:21.771 real 0m1.026s 00:11:21.771 user 0m0.019s 00:11:21.771 sys 0m0.007s 00:11:21.771 11:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.771 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:21.771 ************************************ 00:11:21.772 END TEST unittest_log 00:11:21.772 ************************************ 00:11:22.091 11:55:27 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:11:22.091 11:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.091 11:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.091 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.091 
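The unittest_log run a few entries above ends with hex dumps labelled "log dump" and "spdk dump 16 more chars": an offset column, up to 16 bytes in hex, then their ASCII rendering on each row. That formatting comes from SPDK's log-dump helper; a small usage sketch, assuming the spdk_log_dump() prototype from spdk/log.h, that would produce the same style of output:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/log.h"

    int main(void)
    {
        const char *msg = "spdk dump 16 more chars";

        /* Emits rows in the style seen above, e.g.
         * 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor
         * 00000010 65 20 63 68 61 72 73                            e chars          */
        spdk_log_dump(stderr, "spdk dump test", msg, strlen(msg));
        return 0;
    }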
************************************ 00:11:22.091 START TEST unittest_lvol 00:11:22.091 ************************************ 00:11:22.091 11:55:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:11:22.091 00:11:22.091 00:11:22.091 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.091 http://cunit.sourceforge.net/ 00:11:22.091 00:11:22.091 00:11:22.091 Suite: lvol 00:11:22.091 Test: lvs_init_unload_success ...[2024-11-29 11:55:27.351494] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:11:22.091 passed 00:11:22.091 Test: lvs_init_destroy_success ...[2024-11-29 11:55:27.352541] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:11:22.091 passed 00:11:22.091 Test: lvs_init_opts_success ...passed 00:11:22.091 Test: lvs_unload_lvs_is_null_fail ...[2024-11-29 11:55:27.352936] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:11:22.091 passed 00:11:22.091 Test: lvs_names ...[2024-11-29 11:55:27.353170] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:11:22.091 [2024-11-29 11:55:27.353359] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:11:22.091 [2024-11-29 11:55:27.353682] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:11:22.091 passed 00:11:22.091 Test: lvol_create_destroy_success ...passed 00:11:22.091 Test: lvol_create_fail ...[2024-11-29 11:55:27.354507] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:11:22.091 [2024-11-29 11:55:27.354774] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:11:22.091 passed 00:11:22.091 Test: lvol_destroy_fail ...[2024-11-29 11:55:27.355267] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:11:22.091 passed 00:11:22.091 Test: lvol_close ...[2024-11-29 11:55:27.355653] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:11:22.091 [2024-11-29 11:55:27.355851] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:11:22.091 passed 00:11:22.091 Test: lvol_resize ...passed 00:11:22.091 Test: lvol_set_read_only ...passed 00:11:22.091 Test: test_lvs_load ...[2024-11-29 11:55:27.356796] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:11:22.091 [2024-11-29 11:55:27.356974] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:11:22.091 passed 00:11:22.091 Test: lvols_load ...[2024-11-29 11:55:27.357329] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:11:22.091 [2024-11-29 11:55:27.357603] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:11:22.091 passed 00:11:22.091 Test: lvol_open ...passed 00:11:22.091 Test: lvol_snapshot ...passed 00:11:22.091 Test: lvol_snapshot_fail ...[2024-11-29 11:55:27.358492] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:11:22.091 passed 00:11:22.091 
Test: lvol_clone ...passed 00:11:22.091 Test: lvol_clone_fail ...[2024-11-29 11:55:27.359273] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:11:22.091 passed 00:11:22.091 Test: lvol_iter_clones ...passed 00:11:22.091 Test: lvol_refcnt ...[2024-11-29 11:55:27.359908] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol bc6280ae-78fa-4d70-95ae-b447d0f79211 because it is still open 00:11:22.091 passed 00:11:22.091 Test: lvol_names ...[2024-11-29 11:55:27.360264] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:11:22.091 [2024-11-29 11:55:27.360499] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:11:22.091 [2024-11-29 11:55:27.360877] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:11:22.091 passed 00:11:22.091 Test: lvol_create_thin_provisioned ...passed 00:11:22.091 Test: lvol_rename ...[2024-11-29 11:55:27.361575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:11:22.091 [2024-11-29 11:55:27.361828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:11:22.091 passed 00:11:22.091 Test: lvs_rename ...[2024-11-29 11:55:27.362238] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:11:22.091 passed 00:11:22.091 Test: lvol_inflate ...[2024-11-29 11:55:27.362674] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:11:22.091 passed 00:11:22.091 Test: lvol_decouple_parent ...[2024-11-29 11:55:27.363082] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:11:22.091 passed 00:11:22.091 Test: lvol_get_xattr ...passed 00:11:22.091 Test: lvol_esnap_reload ...passed 00:11:22.091 Test: lvol_esnap_create_bad_args ...[2024-11-29 11:55:27.363691] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:11:22.091 [2024-11-29 11:55:27.363864] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:11:22.091 [2024-11-29 11:55:27.364064] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:11:22.091 [2024-11-29 11:55:27.364341] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:11:22.091 [2024-11-29 11:55:27.364639] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:11:22.091 passed 00:11:22.091 Test: lvol_esnap_create_delete ...passed 00:11:22.091 Test: lvol_esnap_load_esnaps ...[2024-11-29 11:55:27.365102] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:11:22.091 passed 00:11:22.091 Test: lvol_esnap_missing ...[2024-11-29 11:55:27.365369] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:11:22.091 [2024-11-29 11:55:27.365540] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:11:22.091 passed 00:11:22.091 Test: lvol_esnap_hotplug ... 00:11:22.091 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:11:22.091 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:11:22.091 [2024-11-29 11:55:27.366497] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 25e50bb2-383b-4ecf-98a2-6a25d5fc2147: failed to create esnap bs_dev: error -12 00:11:22.091 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:11:22.091 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:11:22.091 [2024-11-29 11:55:27.366871] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 944650fa-20f4-41e0-930b-1e31d8dbdcb6: failed to create esnap bs_dev: error -12 00:11:22.091 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:11:22.091 [2024-11-29 11:55:27.367147] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1e89ccc0-dece-4584-a944-e56ace5cf9fd: failed to create esnap bs_dev: error -12 00:11:22.091 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:11:22.091 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:11:22.091 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:11:22.091 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:11:22.091 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:11:22.091 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:11:22.091 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:11:22.091 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:11:22.091 passed 00:11:22.091 Test: lvol_get_by ...passed 00:11:22.091 00:11:22.091 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.091 suites 1 1 n/a 0 0 00:11:22.091 tests 34 34 34 0 0 00:11:22.091 asserts 1439 1439 1439 0 n/a 00:11:22.091 00:11:22.091 Elapsed time = 0.013 seconds 00:11:22.091 00:11:22.091 real 0m0.052s 00:11:22.091 user 0m0.026s 00:11:22.091 sys 0m0.022s 00:11:22.091 11:55:27 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.091 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.091 ************************************ 00:11:22.091 END TEST unittest_lvol 00:11:22.091 ************************************ 00:11:22.091 11:55:27 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:22.091 11:55:27 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:11:22.091 11:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.091 11:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.091 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.091 ************************************ 00:11:22.091 START TEST unittest_nvme_rdma 00:11:22.091 ************************************ 00:11:22.091 11:55:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:11:22.091 00:11:22.091 00:11:22.091 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.091 http://cunit.sourceforge.net/ 00:11:22.091 00:11:22.091 00:11:22.091 Suite: nvme_rdma 00:11:22.091 Test: test_nvme_rdma_build_sgl_request ...[2024-11-29 11:55:27.443119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:11:22.091 [2024-11-29 11:55:27.443508] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:11:22.092 [2024-11-29 11:55:27.443670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:11:22.092 Test: test_nvme_rdma_build_contig_request ...[2024-11-29 11:55:27.443799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:11:22.092 Test: test_nvme_rdma_create_reqs ...[2024-11-29 11:55:27.444004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_create_rsps ...[2024-11-29 11:55:27.444478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-11-29 11:55:27.444732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_poller_create ...[2024-11-29 11:55:27.444846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
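The nvme_rdma build_sgl_request failures above pin down a wire-format limit: a keyed SGL data block descriptor carries its length in a 24-bit field, so a single descriptor can address at most 16777215 bytes and a 16 MiB (16777216-byte) request has to be split or rejected. The numbers in the log are simply:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Keyed SGL data block descriptor: 24-bit length field. */
        uint32_t max_keyed_sgl = (1u << 24) - 1;    /* 16777215 */
        uint32_t request_size  = 16u * 1024 * 1024; /* 16777216, from the log */

        printf("max %u, requested %u -> %s\n", max_keyed_sgl, request_size,
               request_size > max_keyed_sgl ? "must split" : "fits");
        return 0;
    }

The companion "SGL descriptors (64) exceeds ICD (60)" message is the in-capsule variant of the same kind of size check.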
00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-11-29 11:55:27.445128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_ctrlr_construct ...passed 00:11:22.092 Test: test_nvme_rdma_req_put_and_get ...passed 00:11:22.092 Test: test_nvme_rdma_req_init ...passed 00:11:22.092 Test: test_nvme_rdma_validate_cm_event ...[2024-11-29 11:55:27.445618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_qpair_init ...passed 00:11:22.092 Test: test_nvme_rdma_qpair_submit_request ...[2024-11-29 11:55:27.445697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_memory_domain ...passed 00:11:22.092 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:11:22.092 Test: test_rdma_get_memory_translation ...[2024-11-29 11:55:27.445992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:11:22.092 passed 00:11:22.092 Test: test_get_rdma_qpair_from_wc ...[2024-11-29 11:55:27.446122] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:11:22.092 [2024-11-29 11:55:27.446211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:11:22.092 Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-29 11:55:27.446398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:11:22.092 passed 00:11:22.092 Test: test_nvme_rdma_qpair_set_poller ...[2024-11-29 11:55:27.446489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:11:22.092 [2024-11-29 11:55:27.446652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:11:22.092 [2024-11-29 11:55:27.446724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:11:22.092 [2024-11-29 11:55:27.446794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff0c7bd700 on poll group 0x60b0000001a0 00:11:22.092 [2024-11-29 11:55:27.446894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:11:22.092 [2024-11-29 11:55:27.446957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:11:22.092 [2024-11-29 11:55:27.447030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff0c7bd700 on poll group 0x60b0000001a0 00:11:22.092 [2024-11-29 11:55:27.447145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:11:22.092 passed 00:11:22.092 00:11:22.092 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.092 suites 1 1 n/a 0 0 00:11:22.092 tests 22 22 22 0 0 00:11:22.092 asserts 412 412 412 0 n/a 00:11:22.092 00:11:22.092 Elapsed time = 0.004 seconds 00:11:22.092 00:11:22.092 real 0m0.037s 00:11:22.092 user 0m0.012s 00:11:22.092 sys 0m0.025s 00:11:22.092 11:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.092 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 ************************************ 00:11:22.092 END TEST unittest_nvme_rdma 00:11:22.092 ************************************ 00:11:22.092 11:55:27 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:11:22.092 11:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.092 11:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.092 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 ************************************ 00:11:22.092 START TEST unittest_nvmf_transport 00:11:22.092 ************************************ 00:11:22.092 11:55:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:11:22.092 00:11:22.092 00:11:22.092 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.092 http://cunit.sourceforge.net/ 00:11:22.092 00:11:22.092 00:11:22.092 Suite: nvmf 00:11:22.092 Test: test_spdk_nvmf_transport_create ...[2024-11-29 11:55:27.530062] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:11:22.092 [2024-11-29 11:55:27.530628] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:11:22.092 [2024-11-29 11:55:27.530730] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:11:22.092 [2024-11-29 11:55:27.530887] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:11:22.092 passed 00:11:22.092 Test: test_nvmf_transport_poll_group_create ...passed 00:11:22.092 Test: test_spdk_nvmf_transport_opts_init ...[2024-11-29 11:55:27.531269] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
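The transport_ut failures above are the option sanity checks run when an nvmf transport is created: the transport name has to be registered, io_unit_size must be non-zero and no larger than the iobuf pool's large buffer, and max_io_size must be a power of two of at least 8 KiB, which is why 4096 is rejected even though it is a power of two. The power-of-two-with-a-floor test reduces to a one-line bit trick, sketched here outside SPDK:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* True when x is a power of two no smaller than `min` (here 8 KiB). */
    static bool pow2_at_least(uint32_t x, uint32_t min)
    {
        return x >= min && (x & (x - 1)) == 0;
    }

    int main(void)
    {
        printf("4096  -> %d\n", pow2_at_least(4096, 8192));   /* 0: below the floor */
        printf("16384 -> %d\n", pow2_at_least(16384, 8192));  /* 1: accepted        */
        return 0;
    }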
00:11:22.092 [2024-11-29 11:55:27.531406] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:11:22.092 passed 00:11:22.092 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:11:22.092 00:11:22.092 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.092 suites 1 1 n/a 0 0 00:11:22.092 tests 4 4 4 0 0 00:11:22.092 asserts 49 49 49 0 n/a 00:11:22.092 00:11:22.092 Elapsed time = 0.002 seconds 00:11:22.092 [2024-11-29 11:55:27.531470] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:11:22.092 00:11:22.092 real 0m0.042s 00:11:22.092 user 0m0.029s 00:11:22.092 sys 0m0.013s 00:11:22.092 11:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.092 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 ************************************ 00:11:22.092 END TEST unittest_nvmf_transport 00:11:22.092 ************************************ 00:11:22.351 11:55:27 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:11:22.351 11:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.351 11:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.351 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 ************************************ 00:11:22.351 START TEST unittest_rdma 00:11:22.351 ************************************ 00:11:22.351 11:55:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:11:22.351 00:11:22.351 00:11:22.351 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.351 http://cunit.sourceforge.net/ 00:11:22.351 00:11:22.351 00:11:22.351 Suite: rdma_common 00:11:22.351 Test: test_spdk_rdma_pd ...[2024-11-29 11:55:27.612472] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:11:22.351 [2024-11-29 11:55:27.613033] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:11:22.351 passed 00:11:22.351 00:11:22.351 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.351 suites 1 1 n/a 0 0 00:11:22.351 tests 1 1 1 0 0 00:11:22.351 asserts 31 31 31 0 n/a 00:11:22.351 00:11:22.351 Elapsed time = 0.001 seconds 00:11:22.351 00:11:22.351 real 0m0.032s 00:11:22.351 user 0m0.019s 00:11:22.351 sys 0m0.014s 00:11:22.351 11:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.351 ************************************ 00:11:22.351 END TEST unittest_rdma 00:11:22.351 ************************************ 00:11:22.351 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 11:55:27 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:22.351 11:55:27 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:11:22.351 11:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.351 11:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.351 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 ************************************ 00:11:22.351 START TEST unittest_nvme_cuse 00:11:22.351 ************************************ 00:11:22.351 11:55:27 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:11:22.351 00:11:22.351 00:11:22.351 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.351 http://cunit.sourceforge.net/ 00:11:22.351 00:11:22.351 00:11:22.351 Suite: nvme_cuse 00:11:22.351 Test: test_cuse_nvme_submit_io_read_write ...passed 00:11:22.351 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:11:22.351 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:11:22.351 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:11:22.351 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:11:22.351 Test: test_cuse_nvme_submit_io ...[2024-11-29 11:55:27.697294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:11:22.351 passed 00:11:22.351 Test: test_cuse_nvme_reset ...[2024-11-29 11:55:27.697605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:11:22.351 passed 00:11:22.351 Test: test_nvme_cuse_stop ...passed 00:11:22.351 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:11:22.351 00:11:22.351 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.351 suites 1 1 n/a 0 0 00:11:22.351 tests 9 9 9 0 0 00:11:22.351 asserts 121 121 121 0 n/a 00:11:22.351 00:11:22.351 Elapsed time = 0.001 seconds 00:11:22.351 00:11:22.351 real 0m0.034s 00:11:22.351 user 0m0.023s 00:11:22.351 sys 0m0.011s 00:11:22.351 11:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.351 ************************************ 00:11:22.351 END TEST unittest_nvme_cuse 00:11:22.351 ************************************ 00:11:22.351 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 11:55:27 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:11:22.351 11:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.351 11:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.351 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 ************************************ 00:11:22.351 START TEST unittest_nvmf 00:11:22.351 ************************************ 00:11:22.351 11:55:27 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:11:22.351 11:55:27 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:11:22.351 00:11:22.351 00:11:22.351 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.351 http://cunit.sourceforge.net/ 00:11:22.351 00:11:22.351 00:11:22.351 Suite: nvmf 00:11:22.351 Test: test_get_log_page ...[2024-11-29 11:55:27.778466] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:11:22.351 passed 00:11:22.351 Test: test_process_fabrics_cmd ...passed 00:11:22.351 Test: test_connect ...[2024-11-29 11:55:27.779598] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:11:22.351 [2024-11-29 11:55:27.779740] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:11:22.351 [2024-11-29 11:55:27.779832] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:11:22.351 [2024-11-29 11:55:27.779904] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
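The test_connect sequence above, which continues with the SQSIZE range checks just below, steps through the Fabrics Connect validation: the command must carry the full 1024-byte connect data block, RECFMT must be the supported value 0, HOSTNQN must be NUL-terminated inside its field, and the host has to be on the subsystem's allow list. A stand-alone sketch of the two purely local checks, with the 1024-byte block and 256-byte NQN field sizes assumed from the NVMe-oF spec rather than copied out of SPDK:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define CONNECT_DATA_LEN 1024u  /* assumed size of the connect data block */
    #define NQN_FIELD_SIZE   256u   /* assumed size of the hostnqn field      */

    static bool connect_data_ok(size_t data_len, const char hostnqn[NQN_FIELD_SIZE])
    {
        if (data_len < CONNECT_DATA_LEN) {
            fprintf(stderr, "Connect command data length 0x%zx too small\n", data_len);
            return false;
        }
        if (strnlen(hostnqn, NQN_FIELD_SIZE) == NQN_FIELD_SIZE) {
            fprintf(stderr, "Connect HOSTNQN is not null terminated\n");
            return false;
        }
        return true;
    }

    int main(void)
    {
        char nqn[NQN_FIELD_SIZE] = "nqn.2016-06.io.spdk:host1";
        printf("%d\n", connect_data_ok(0x3ff, nqn));            /* short buffer, as in the log */
        printf("%d\n", connect_data_ok(CONNECT_DATA_LEN, nqn)); /* accepted */
        return 0;
    }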
00:11:22.351 [2024-11-29 11:55:27.780047] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:11:22.351 [2024-11-29 11:55:27.780140] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:11:22.351 [2024-11-29 11:55:27.780333] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:11:22.351 [2024-11-29 11:55:27.780407] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:11:22.351 [2024-11-29 11:55:27.780572] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:11:22.351 [2024-11-29 11:55:27.780689] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:11:22.351 [2024-11-29 11:55:27.781060] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:11:22.351 [2024-11-29 11:55:27.781174] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:11:22.351 [2024-11-29 11:55:27.781315] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:11:22.351 [2024-11-29 11:55:27.781425] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:11:22.351 [2024-11-29 11:55:27.781591] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:11:22.351 [2024-11-29 11:55:27.781816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:11:22.351 passed 00:11:22.351 Test: test_get_ns_id_desc_list ...passed 00:11:22.351 Test: test_identify_ns ...[2024-11-29 11:55:27.782216] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:22.351 [2024-11-29 11:55:27.782512] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:11:22.351 [2024-11-29 11:55:27.782699] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:22.351 passed 00:11:22.351 Test: test_identify_ns_iocs_specific ...[2024-11-29 11:55:27.782902] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:22.351 [2024-11-29 11:55:27.783256] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:22.351 passed 00:11:22.351 Test: test_reservation_write_exclusive ...passed 00:11:22.351 Test: test_reservation_exclusive_access ...passed 00:11:22.351 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:11:22.351 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:11:22.351 Test: test_reservation_notification_log_page ...passed 00:11:22.351 Test: test_get_dif_ctx ...passed 00:11:22.351 Test: test_set_get_features ...[2024-11-29 11:55:27.783847] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:11:22.351 [2024-11-29 11:55:27.783912] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:11:22.351 [2024-11-29 11:55:27.783985] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:11:22.351 [2024-11-29 11:55:27.784074] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:11:22.351 passed 00:11:22.351 Test: test_identify_ctrlr ...passed 00:11:22.351 Test: test_identify_ctrlr_iocs_specific ...passed 00:11:22.351 Test: test_custom_admin_cmd ...passed 00:11:22.351 Test: test_fused_compare_and_write ...[2024-11-29 11:55:27.784630] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:11:22.351 [2024-11-29 11:55:27.784699] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:11:22.351 passed 00:11:22.351 Test: test_multi_async_event_reqs ...passed 00:11:22.351 Test: test_get_ana_log_page_one_ns_per_anagrp ...[2024-11-29 11:55:27.784770] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:11:22.351 passed 00:11:22.351 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:11:22.351 Test: test_multi_async_events ...passed 00:11:22.351 Test: test_rae ...passed 00:11:22.351 Test: test_nvmf_ctrlr_create_destruct ...passed 00:11:22.351 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:11:22.351 Test: test_spdk_nvmf_request_zcopy_start ...[2024-11-29 11:55:27.785459] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:11:22.351 passed 00:11:22.351 Test: test_zcopy_read ...passed 00:11:22.351 Test: test_zcopy_write ...passed 00:11:22.351 Test: test_nvmf_property_set ...passed 00:11:22.351 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-11-29 11:55:27.785687] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:11:22.351 [2024-11-29 11:55:27.785809] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:11:22.351 passed 00:11:22.351 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-29 11:55:27.785889] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:11:22.351 [2024-11-29 11:55:27.785958] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:11:22.351 passed 00:11:22.351 00:11:22.351 [2024-11-29 11:55:27.786028] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:11:22.351 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.351 suites 1 1 n/a 0 0 00:11:22.351 tests 30 30 30 0 0 00:11:22.352 asserts 885 885 885 0 n/a 00:11:22.352 00:11:22.352 Elapsed time = 0.008 seconds 00:11:22.352 11:55:27 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:11:22.352 00:11:22.352 00:11:22.352 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.352 http://cunit.sourceforge.net/ 00:11:22.352 00:11:22.352 00:11:22.352 Suite: nvmf 00:11:22.352 Test: test_get_rw_params ...passed 00:11:22.352 Test: test_lba_in_range ...passed 00:11:22.352 Test: test_get_dif_ctx ...passed 00:11:22.352 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:11:22.352 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-11-29 11:55:27.823737] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:11:22.352 [2024-11-29 11:55:27.824061] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:11:22.352 passed 00:11:22.352 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-29 11:55:27.824172] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:11:22.352 [2024-11-29 11:55:27.824235] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:11:22.352 passed 00:11:22.352 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-11-29 11:55:27.824323] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:11:22.352 [2024-11-29 11:55:27.824435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:11:22.352 [2024-11-29 11:55:27.824486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:11:22.352 passed 00:11:22.352 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:11:22.352 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:11:22.352 00:11:22.352 [2024-11-29 11:55:27.824568] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:11:22.352 [2024-11-29 11:55:27.824609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:11:22.352 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.352 suites 1 1 n/a 0 0 00:11:22.352 tests 9 9 9 0 0 00:11:22.352 asserts 157 157 157 0 n/a 00:11:22.352 00:11:22.352 Elapsed time = 0.001 seconds 00:11:22.352 11:55:27 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:11:22.352 00:11:22.352 00:11:22.352 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.352 http://cunit.sourceforge.net/ 00:11:22.352 00:11:22.352 00:11:22.352 Suite: nvmf 00:11:22.352 Test: test_discovery_log ...passed 00:11:22.352 Test: test_discovery_log_with_filters ...passed 00:11:22.352 00:11:22.352 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.352 suites 1 1 n/a 0 0 00:11:22.352 tests 2 2 2 0 0 00:11:22.352 asserts 238 238 238 0 n/a 00:11:22.352 00:11:22.352 Elapsed time = 0.002 seconds 00:11:22.611 11:55:27 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:11:22.611 00:11:22.611 00:11:22.611 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.611 http://cunit.sourceforge.net/ 00:11:22.611 00:11:22.611 00:11:22.611 Suite: nvmf 
00:11:22.611 Test: nvmf_test_create_subsystem ...[2024-11-29 11:55:27.897909] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:11:22.611 [2024-11-29 11:55:27.898365] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:11:22.611 [2024-11-29 11:55:27.898529] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:11:22.611 [2024-11-29 11:55:27.898599] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:11:22.611 [2024-11-29 11:55:27.898658] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:11:22.611 [2024-11-29 11:55:27.898735] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:11:22.611 [2024-11-29 11:55:27.898891] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:11:22.611 [2024-11-29 11:55:27.899165] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:11:22.611 [2024-11-29 11:55:27.899338] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:11:22.611 passed 00:11:22.611 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-29 11:55:27.899434] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:11:22.611 [2024-11-29 11:55:27.899493] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:11:22.611 [2024-11-29 11:55:27.899786] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:11:22.611 passed 00:11:22.611 Test: test_spdk_nvmf_subsystem_set_sn ...[2024-11-29 11:55:27.899978] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:11:22.611 passed 00:11:22.611 Test: test_reservation_register ...[2024-11-29 11:55:27.900335] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.611 passed 00:11:22.611 Test: test_reservation_register_with_ptpl ...[2024-11-29 11:55:27.900507] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:11:22.611 passed 00:11:22.611 Test: test_reservation_acquire_preempt_1 ...[2024-11-29 11:55:27.901628] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.611 passed 00:11:22.611 Test: test_reservation_acquire_release_with_ptpl ...passed 00:11:22.611 Test: test_reservation_release ...[2024-11-29 11:55:27.903545] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.611 passed 00:11:22.611 Test: test_reservation_unregister_notification ...[2024-11-29 11:55:27.903837] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.611 passed 00:11:22.611 Test: test_reservation_release_notification ...[2024-11-29 11:55:27.904121] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.611 passed 00:11:22.612 Test: test_reservation_release_notification_write_exclusive ...[2024-11-29 11:55:27.904404] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.612 passed 00:11:22.612 Test: test_reservation_clear_notification ...[2024-11-29 11:55:27.904724] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.612 passed 00:11:22.612 Test: test_reservation_preempt_notification ...[2024-11-29 11:55:27.904982] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:11:22.612 passed 00:11:22.612 Test: test_spdk_nvmf_ns_event ...passed 00:11:22.612 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:11:22.612 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:11:22.612 Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-29 11:55:27.905816] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_ns_reservation_report ...[2024-11-29 11:55:27.905939] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_nqn_is_valid ...[2024-11-29 11:55:27.906139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:11:22.612 [2024-11-29 11:55:27.906268] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:11:22.612 [2024-11-29 11:55:27.906362] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:c4986775-4ce3-4074-b23a-a563a53c782": uuid is not the correct length 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_ns_reservation_restore ...[2024-11-29 11:55:27.906416] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:11:22.612 [2024-11-29 11:55:27.906612] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_subsystem_state_change ...passed 00:11:22.612 Test: test_nvmf_reservation_custom_ops ...passed 00:11:22.612 00:11:22.612 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.612 suites 1 1 n/a 0 0 00:11:22.612 tests 22 22 22 0 0 00:11:22.612 asserts 407 407 407 0 n/a 00:11:22.612 00:11:22.612 Elapsed time = 0.010 seconds 00:11:22.612 11:55:27 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:11:22.612 00:11:22.612 00:11:22.612 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.612 http://cunit.sourceforge.net/ 00:11:22.612 00:11:22.612 00:11:22.612 Suite: nvmf 00:11:22.612 Test: test_nvmf_tcp_create ...[2024-11-29 11:55:27.961673] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_destroy ...passed 00:11:22.612 Test: test_nvmf_tcp_poll_group_create ...passed 00:11:22.612 Test: test_nvmf_tcp_send_c2h_data ...passed 00:11:22.612 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:11:22.612 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:11:22.612 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:11:22.612 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-29 11:55:28.051345] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-11-29 11:55:28.051439] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.051528] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.051586] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.051633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_icreq_handle ...[2024-11-29 11:55:28.051758] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:11:22.612 [2024-11-29 11:55:28.051857] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.051937] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.051984] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:11:22.612 [2024-11-29 11:55:28.052040] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_check_xfer_type ...[2024-11-29 11:55:28.052078] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.052115] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.052172] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.052246] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_invalid_sgl ...[2024-11-29 11:55:28.052328] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-29 11:55:28.052384] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.052433] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba7e70 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.052494] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc9cba8bd0 00:11:22.612 [2024-11-29 11:55:28.052584] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.052670] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.052730] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc9cba8330 00:11:22.612 [2024-11-29 11:55:28.052774] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.052814] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.052861] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:11:22.612 [2024-11-29 11:55:28.052924] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.052980] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.053029] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:11:22.612 [2024-11-29 11:55:28.053071] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053113] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.053174] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053219] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.053294] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053353] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.053416] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053448] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.053504] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 [2024-11-29 11:55:28.053623] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053667] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-29 
11:55:28.053749] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:11:22.612 [2024-11-29 11:55:28.053783] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9cba8330 is same with the state(5) to be set 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-29 11:55:28.075316] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-29 11:55:28.075439] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:11:22.612 [2024-11-29 11:55:28.075926] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:11:22.612 passed 00:11:22.612 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-11-29 11:55:28.076009] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:11:22.612 [2024-11-29 11:55:28.076279] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:11:22.613 [2024-11-29 11:55:28.076341] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:11:22.613 passed 00:11:22.613 00:11:22.613 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.613 suites 1 1 n/a 0 0 00:11:22.613 tests 17 17 17 0 0 00:11:22.613 asserts 222 222 222 0 n/a 00:11:22.613 00:11:22.613 Elapsed time = 0.134 seconds 00:11:22.869 11:55:28 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:11:22.869 00:11:22.869 00:11:22.869 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.869 http://cunit.sourceforge.net/ 00:11:22.869 00:11:22.869 00:11:22.869 Suite: nvmf 00:11:22.869 Test: test_nvmf_tgt_create_poll_group ...passed 00:11:22.869 00:11:22.869 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.869 suites 1 1 n/a 0 0 00:11:22.869 tests 1 1 1 0 0 00:11:22.869 asserts 17 17 17 0 n/a 00:11:22.869 00:11:22.869 Elapsed time = 0.024 seconds 00:11:22.869 00:11:22.869 real 0m0.475s 00:11:22.869 user 0m0.236s 00:11:22.869 sys 0m0.241s 00:11:22.869 11:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.869 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.869 ************************************ 00:11:22.869 END TEST unittest_nvmf 00:11:22.869 ************************************ 00:11:22.869 11:55:28 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:22.869 11:55:28 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:22.869 11:55:28 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:11:22.869 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.869 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.869 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.869 ************************************ 00:11:22.869 START TEST 
unittest_nvmf_rdma 00:11:22.869 ************************************ 00:11:22.869 11:55:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:11:22.869 00:11:22.869 00:11:22.869 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.869 http://cunit.sourceforge.net/ 00:11:22.869 00:11:22.869 00:11:22.869 Suite: nvmf 00:11:22.869 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-29 11:55:28.311806] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:11:22.869 [2024-11-29 11:55:28.312181] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:11:22.869 [2024-11-29 11:55:28.312246] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:11:22.869 passed 00:11:22.869 Test: test_spdk_nvmf_rdma_request_process ...passed 00:11:22.869 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:11:22.869 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:11:22.869 Test: test_nvmf_rdma_opts_init ...passed 00:11:22.869 Test: test_nvmf_rdma_request_free_data ...passed 00:11:22.869 Test: test_nvmf_rdma_update_ibv_state ...[2024-11-29 11:55:28.313551] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:11:22.869 [2024-11-29 11:55:28.313616] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:11:22.869 passed 00:11:22.869 Test: test_nvmf_rdma_resources_create ...passed 00:11:22.869 Test: test_nvmf_rdma_qpair_compare ...passed 00:11:22.869 Test: test_nvmf_rdma_resize_cq ...[2024-11-29 11:55:28.315093] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:11:22.869 Using CQ of insufficient size may lead to CQ overrun 00:11:22.869 [2024-11-29 11:55:28.315228] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:11:22.869 [2024-11-29 11:55:28.315300] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:11:22.869 passed 00:11:22.869 00:11:22.869 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.869 suites 1 1 n/a 0 0 00:11:22.869 tests 10 10 10 0 0 00:11:22.869 asserts 584 584 584 0 n/a 00:11:22.870 00:11:22.870 Elapsed time = 0.004 seconds 00:11:22.870 00:11:22.870 real 0m0.040s 00:11:22.870 user 0m0.020s 00:11:22.870 sys 0m0.020s 00:11:22.870 11:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.870 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.870 ************************************ 00:11:22.870 END TEST unittest_nvmf_rdma 00:11:22.870 ************************************ 00:11:22.870 11:55:28 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:22.870 11:55:28 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:11:22.870 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.870 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.870 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.130 ************************************ 00:11:23.130 START TEST unittest_scsi 00:11:23.130 ************************************ 00:11:23.130 11:55:28 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:11:23.130 11:55:28 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:11:23.130 00:11:23.130 00:11:23.130 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.130 http://cunit.sourceforge.net/ 00:11:23.130 00:11:23.130 00:11:23.130 Suite: dev_suite 00:11:23.130 Test: dev_destruct_null_dev ...passed 00:11:23.130 Test: dev_destruct_zero_luns ...passed 00:11:23.130 Test: dev_destruct_null_lun ...passed 00:11:23.130 Test: dev_destruct_success ...passed 00:11:23.130 Test: dev_construct_num_luns_zero ...[2024-11-29 11:55:28.398620] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:11:23.130 passed 00:11:23.130 Test: dev_construct_no_lun_zero ...passed 00:11:23.130 Test: dev_construct_null_lun ...[2024-11-29 11:55:28.399035] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:11:23.130 passed 00:11:23.130 Test: dev_construct_name_too_long ...[2024-11-29 11:55:28.399102] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:11:23.130 [2024-11-29 11:55:28.399164] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:11:23.130 passed 00:11:23.130 Test: dev_construct_success ...passed 00:11:23.130 Test: dev_construct_success_lun_zero_not_first ...passed 00:11:23.130 Test: 
dev_queue_mgmt_task_success ...passed 00:11:23.130 Test: dev_queue_task_success ...passed 00:11:23.130 Test: dev_stop_success ...passed 00:11:23.130 Test: dev_add_port_max_ports ...[2024-11-29 11:55:28.399571] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:11:23.130 passed 00:11:23.130 Test: dev_add_port_construct_failure1 ...passed 00:11:23.130 Test: dev_add_port_construct_failure2 ...[2024-11-29 11:55:28.399701] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:11:23.130 [2024-11-29 11:55:28.399813] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:11:23.130 passed 00:11:23.130 Test: dev_add_port_success1 ...passed 00:11:23.130 Test: dev_add_port_success2 ...passed 00:11:23.130 Test: dev_add_port_success3 ...passed 00:11:23.130 Test: dev_find_port_by_id_num_ports_zero ...passed 00:11:23.130 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:11:23.130 Test: dev_find_port_by_id_success ...passed 00:11:23.130 Test: dev_add_lun_bdev_not_found ...passed 00:11:23.130 Test: dev_add_lun_no_free_lun_id ...[2024-11-29 11:55:28.400340] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:11:23.130 passed 00:11:23.130 Test: dev_add_lun_success1 ...passed 00:11:23.130 Test: dev_add_lun_success2 ...passed 00:11:23.130 Test: dev_check_pending_tasks ...passed 00:11:23.130 Test: dev_iterate_luns ...passed 00:11:23.130 Test: dev_find_free_lun ...passed 00:11:23.130 00:11:23.130 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.130 suites 1 1 n/a 0 0 00:11:23.130 tests 29 29 29 0 0 00:11:23.130 asserts 97 97 97 0 n/a 00:11:23.130 00:11:23.130 Elapsed time = 0.003 seconds 00:11:23.130 11:55:28 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:11:23.130 00:11:23.130 00:11:23.130 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.130 http://cunit.sourceforge.net/ 00:11:23.130 00:11:23.130 00:11:23.130 Suite: lun_suite 00:11:23.130 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-11-29 11:55:28.436668] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:11:23.130 passed 00:11:23.130 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-29 11:55:28.437214] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:11:23.130 passed 00:11:23.130 Test: lun_task_mgmt_execute_lun_reset ...passed 00:11:23.130 Test: lun_task_mgmt_execute_target_reset ...passed 00:11:23.130 Test: lun_task_mgmt_execute_invalid_case ...passed 00:11:23.130 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:11:23.130 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:11:23.130 Test: lun_append_task_null_lun_not_supported ...passed 00:11:23.130 Test: lun_execute_scsi_task_pending ...[2024-11-29 11:55:28.437465] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:11:23.130 passed 00:11:23.130 Test: lun_execute_scsi_task_complete ...passed 00:11:23.130 Test: lun_execute_scsi_task_resize ...passed 00:11:23.130 Test: lun_destruct_success ...passed 00:11:23.130 Test: lun_construct_null_ctx ...passed 00:11:23.130 Test: lun_construct_success ...passed 00:11:23.130 Test: 
lun_reset_task_wait_scsi_task_complete ...[2024-11-29 11:55:28.437786] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:11:23.130 passed 00:11:23.130 Test: lun_reset_task_suspend_scsi_task ...passed 00:11:23.130 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:11:23.130 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:11:23.130 00:11:23.130 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.130 suites 1 1 n/a 0 0 00:11:23.130 tests 18 18 18 0 0 00:11:23.130 asserts 153 153 153 0 n/a 00:11:23.130 00:11:23.130 Elapsed time = 0.002 seconds 00:11:23.130 11:55:28 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:11:23.130 00:11:23.130 00:11:23.130 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.130 http://cunit.sourceforge.net/ 00:11:23.130 00:11:23.130 00:11:23.130 Suite: scsi_suite 00:11:23.130 Test: scsi_init ...passed 00:11:23.130 00:11:23.130 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.130 suites 1 1 n/a 0 0 00:11:23.130 tests 1 1 1 0 0 00:11:23.130 asserts 1 1 1 0 n/a 00:11:23.130 00:11:23.130 Elapsed time = 0.000 seconds 00:11:23.130 11:55:28 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:11:23.130 00:11:23.130 00:11:23.130 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.130 http://cunit.sourceforge.net/ 00:11:23.130 00:11:23.130 00:11:23.130 Suite: translation_suite 00:11:23.130 Test: mode_select_6_test ...passed 00:11:23.130 Test: mode_select_6_test2 ...passed 00:11:23.130 Test: mode_sense_6_test ...passed 00:11:23.130 Test: mode_sense_10_test ...passed 00:11:23.130 Test: inquiry_evpd_test ...passed 00:11:23.130 Test: inquiry_standard_test ...passed 00:11:23.130 Test: inquiry_overflow_test ...passed 00:11:23.130 Test: task_complete_test ...passed 00:11:23.130 Test: lba_range_test ...passed 00:11:23.130 Test: xfer_len_test ...[2024-11-29 11:55:28.498757] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:11:23.130 passed 00:11:23.130 Test: xfer_test ...passed 00:11:23.130 Test: scsi_name_padding_test ...passed 00:11:23.130 Test: get_dif_ctx_test ...passed 00:11:23.130 Test: unmap_split_test ...passed 00:11:23.130 00:11:23.130 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.130 suites 1 1 n/a 0 0 00:11:23.130 tests 14 14 14 0 0 00:11:23.130 asserts 1200 1200 1200 0 n/a 00:11:23.130 00:11:23.130 Elapsed time = 0.003 seconds 00:11:23.130 11:55:28 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:11:23.130 00:11:23.130 00:11:23.130 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.130 http://cunit.sourceforge.net/ 00:11:23.130 00:11:23.130 00:11:23.130 Suite: reservation_suite 00:11:23.130 Test: test_reservation_register ...[2024-11-29 11:55:28.526568] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:11:23.130 passed 00:11:23.130 Test: test_reservation_reserve ...[2024-11-29 11:55:28.527025] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:11:23.130 [2024-11-29 11:55:28.527126] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 
00:11:23.130 [2024-11-29 11:55:28.527253] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:11:23.130 passed 00:11:23.130 Test: test_reservation_preempt_non_all_regs ...[2024-11-29 11:55:28.527354] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:11:23.130 [2024-11-29 11:55:28.527460] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:11:23.130 passed 00:11:23.130 Test: test_reservation_preempt_all_regs ...[2024-11-29 11:55:28.527649] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:11:23.130 passed 00:11:23.130 Test: test_reservation_cmds_conflict ...[2024-11-29 11:55:28.527806] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:11:23.130 [2024-11-29 11:55:28.527905] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:11:23.130 [2024-11-29 11:55:28.527972] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:11:23.130 [2024-11-29 11:55:28.528016] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:11:23.130 [2024-11-29 11:55:28.528068] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:11:23.130 passed 00:11:23.130 Test: test_scsi2_reserve_release ...passed 00:11:23.130 Test: test_pr_with_scsi2_reserve_release ...[2024-11-29 11:55:28.528121] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:11:23.130 [2024-11-29 11:55:28.528264] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:11:23.130 passed 00:11:23.130 00:11:23.130 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.130 suites 1 1 n/a 0 0 00:11:23.130 tests 7 7 7 0 0 00:11:23.130 asserts 257 257 257 0 n/a 00:11:23.131 00:11:23.131 Elapsed time = 0.002 seconds 00:11:23.131 00:11:23.131 real 0m0.160s 00:11:23.131 user 0m0.085s 00:11:23.131 sys 0m0.076s 00:11:23.131 11:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.131 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.131 ************************************ 00:11:23.131 END TEST unittest_scsi 00:11:23.131 ************************************ 00:11:23.131 11:55:28 -- unit/unittest.sh@252 -- # uname -s 00:11:23.131 11:55:28 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:11:23.131 11:55:28 -- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock 00:11:23.131 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.131 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.131 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.131 ************************************ 00:11:23.131 START TEST unittest_sock 00:11:23.131 ************************************ 00:11:23.131 11:55:28 -- common/autotest_common.sh@1114 -- # unittest_sock 00:11:23.131 11:55:28 -- unit/unittest.sh@123 -- 
# /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:11:23.131 00:11:23.131 00:11:23.131 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.131 http://cunit.sourceforge.net/ 00:11:23.131 00:11:23.131 00:11:23.131 Suite: sock 00:11:23.131 Test: posix_sock ...passed 00:11:23.131 Test: ut_sock ...passed 00:11:23.131 Test: posix_sock_group ...passed 00:11:23.131 Test: ut_sock_group ...passed 00:11:23.388 Test: posix_sock_group_fairness ...passed 00:11:23.388 Test: _posix_sock_close ...passed 00:11:23.388 Test: sock_get_default_opts ...passed 00:11:23.388 Test: ut_sock_impl_get_set_opts ...passed 00:11:23.388 Test: posix_sock_impl_get_set_opts ...passed 00:11:23.388 Test: ut_sock_map ...passed 00:11:23.388 Test: override_impl_opts ...passed 00:11:23.388 Test: ut_sock_group_get_ctx ...passed 00:11:23.388 00:11:23.388 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.388 suites 1 1 n/a 0 0 00:11:23.388 tests 12 12 12 0 0 00:11:23.388 asserts 349 349 349 0 n/a 00:11:23.388 00:11:23.388 Elapsed time = 0.007 seconds 00:11:23.388 11:55:28 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:11:23.388 00:11:23.388 00:11:23.388 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.388 http://cunit.sourceforge.net/ 00:11:23.388 00:11:23.388 00:11:23.388 Suite: posix 00:11:23.388 Test: flush ...passed 00:11:23.388 00:11:23.388 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.388 suites 1 1 n/a 0 0 00:11:23.388 tests 1 1 1 0 0 00:11:23.388 asserts 28 28 28 0 n/a 00:11:23.388 00:11:23.388 Elapsed time = 0.000 seconds 00:11:23.388 11:55:28 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:23.388 00:11:23.388 real 0m0.091s 00:11:23.388 user 0m0.035s 00:11:23.388 sys 0m0.033s 00:11:23.388 11:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.388 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 ************************************ 00:11:23.388 END TEST unittest_sock 00:11:23.388 ************************************ 00:11:23.388 11:55:28 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:11:23.388 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.388 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.388 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 ************************************ 00:11:23.388 START TEST unittest_thread 00:11:23.388 ************************************ 00:11:23.388 11:55:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:11:23.388 00:11:23.388 00:11:23.388 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.388 http://cunit.sourceforge.net/ 00:11:23.388 00:11:23.388 00:11:23.388 Suite: io_channel 00:11:23.388 Test: thread_alloc ...passed 00:11:23.388 Test: thread_send_msg ...passed 00:11:23.388 Test: thread_poller ...passed 00:11:23.388 Test: poller_pause ...passed 00:11:23.388 Test: thread_for_each ...passed 00:11:23.388 Test: for_each_channel_remove ...passed 00:11:23.388 Test: for_each_channel_unreg ...[2024-11-29 11:55:28.772745] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x7fff88486690 already registered (old:0x613000000200 new:0x6130000003c0) 00:11:23.388 passed 00:11:23.388 Test: thread_name ...passed 
00:11:23.388 Test: channel ...[2024-11-29 11:55:28.776896] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x5616624ba0e0 00:11:23.388 passed 00:11:23.388 Test: channel_destroy_races ...passed 00:11:23.388 Test: thread_exit_test ...[2024-11-29 11:55:28.782075] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:11:23.388 passed 00:11:23.388 Test: thread_update_stats_test ...passed 00:11:23.388 Test: nested_channel ...passed 00:11:23.388 Test: device_unregister_and_thread_exit_race ...passed 00:11:23.388 Test: cache_closest_timed_poller ...passed 00:11:23.388 Test: multi_timed_pollers_have_same_expiration ...passed 00:11:23.388 Test: io_device_lookup ...passed 00:11:23.388 Test: spdk_spin ...[2024-11-29 11:55:28.792963] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:11:23.388 [2024-11-29 11:55:28.793057] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff88486680 00:11:23.388 [2024-11-29 11:55:28.793164] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:11:23.388 [2024-11-29 11:55:28.794874] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:23.388 [2024-11-29 11:55:28.794953] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff88486680 00:11:23.388 [2024-11-29 11:55:28.794990] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:11:23.388 [2024-11-29 11:55:28.795037] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff88486680 00:11:23.388 [2024-11-29 11:55:28.795077] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:11:23.388 [2024-11-29 11:55:28.795129] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff88486680 00:11:23.388 [2024-11-29 11:55:28.795163] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:11:23.388 [2024-11-29 11:55:28.795223] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff88486680 00:11:23.388 passed 00:11:23.388 Test: for_each_channel_and_thread_exit_race ...passed 00:11:23.388 Test: for_each_thread_and_thread_exit_race ...passed 00:11:23.388 00:11:23.388 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.388 suites 1 1 n/a 0 0 00:11:23.388 tests 20 20 20 0 0 00:11:23.388 asserts 409 409 409 0 n/a 00:11:23.388 00:11:23.388 Elapsed time = 0.050 seconds 00:11:23.388 00:11:23.388 real 0m0.091s 00:11:23.388 user 0m0.067s 00:11:23.388 sys 0m0.025s 00:11:23.388 11:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.388 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 ************************************ 00:11:23.388 END TEST unittest_thread 00:11:23.388 
************************************ 00:11:23.388 11:55:28 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:11:23.388 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.388 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.388 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 ************************************ 00:11:23.388 START TEST unittest_iobuf 00:11:23.388 ************************************ 00:11:23.388 11:55:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:11:23.388 00:11:23.388 00:11:23.388 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.388 http://cunit.sourceforge.net/ 00:11:23.388 00:11:23.389 00:11:23.389 Suite: io_channel 00:11:23.389 Test: iobuf ...passed 00:11:23.389 Test: iobuf_cache ...[2024-11-29 11:55:28.895077] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:11:23.389 [2024-11-29 11:55:28.896012] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:11:23.389 [2024-11-29 11:55:28.896359] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:11:23.389 [2024-11-29 11:55:28.896592] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:11:23.389 [2024-11-29 11:55:28.896861] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:11:23.389 [2024-11-29 11:55:28.897070] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:11:23.646 passed 00:11:23.646 00:11:23.646 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.646 suites 1 1 n/a 0 0 00:11:23.646 tests 2 2 2 0 0 00:11:23.646 asserts 107 107 107 0 n/a 00:11:23.646 00:11:23.646 Elapsed time = 0.007 seconds 00:11:23.646 00:11:23.646 real 0m0.042s 00:11:23.646 user 0m0.029s 00:11:23.646 sys 0m0.012s 00:11:23.646 11:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.646 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.646 ************************************ 00:11:23.646 END TEST unittest_iobuf 00:11:23.646 ************************************ 00:11:23.646 11:55:28 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:11:23.646 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.646 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.646 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.646 ************************************ 00:11:23.646 START TEST unittest_util 00:11:23.646 ************************************ 00:11:23.646 11:55:28 -- common/autotest_common.sh@1114 -- # unittest_util 00:11:23.646 11:55:28 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:11:23.646 00:11:23.646 00:11:23.646 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.646 http://cunit.sourceforge.net/ 00:11:23.646 00:11:23.646 00:11:23.646 Suite: base64 00:11:23.646 Test: test_base64_get_encoded_strlen ...passed 00:11:23.646 Test: test_base64_get_decoded_len ...passed 00:11:23.646 Test: test_base64_encode ...passed 00:11:23.646 Test: test_base64_decode ...passed 00:11:23.647 Test: test_base64_urlsafe_encode ...passed 00:11:23.647 Test: test_base64_urlsafe_decode ...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 6 6 6 0 0 00:11:23.647 asserts 112 112 112 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.000 seconds 00:11:23.647 11:55:28 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:11:23.647 00:11:23.647 00:11:23.647 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.647 http://cunit.sourceforge.net/ 00:11:23.647 00:11:23.647 00:11:23.647 Suite: bit_array 00:11:23.647 Test: test_1bit ...passed 00:11:23.647 Test: test_64bit ...passed 00:11:23.647 Test: test_find ...passed 00:11:23.647 Test: test_resize ...passed 00:11:23.647 Test: test_errors ...passed 00:11:23.647 Test: test_count ...passed 00:11:23.647 Test: test_mask_store_load ...passed 00:11:23.647 Test: test_mask_clear ...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 8 8 8 0 0 00:11:23.647 asserts 5075 5075 5075 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.001 seconds 00:11:23.647 11:55:29 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:11:23.647 00:11:23.647 00:11:23.647 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.647 http://cunit.sourceforge.net/ 00:11:23.647 00:11:23.647 00:11:23.647 Suite: cpuset 00:11:23.647 Test: test_cpuset ...passed 00:11:23.647 Test: test_cpuset_parse ...[2024-11-29 11:55:29.033172] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:11:23.647 [2024-11-29 11:55:29.033552] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:11:23.647 [2024-11-29 11:55:29.033674] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:11:23.647 [2024-11-29 11:55:29.033808] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:11:23.647 [2024-11-29 11:55:29.033877] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:11:23.647 [2024-11-29 11:55:29.033928] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:11:23.647 [2024-11-29 11:55:29.033970] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:11:23.647 [2024-11-29 11:55:29.034032] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:11:23.647 passed 00:11:23.647 Test: test_cpuset_fmt ...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 3 3 3 0 0 00:11:23.647 asserts 65 65 65 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.003 seconds 00:11:23.647 11:55:29 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:11:23.647 00:11:23.647 00:11:23.647 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.647 http://cunit.sourceforge.net/ 00:11:23.647 00:11:23.647 00:11:23.647 Suite: crc16 00:11:23.647 Test: test_crc16_t10dif ...passed 00:11:23.647 Test: test_crc16_t10dif_seed ...passed 00:11:23.647 Test: test_crc16_t10dif_copy ...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 3 3 3 0 0 00:11:23.647 asserts 5 5 5 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.000 seconds 00:11:23.647 11:55:29 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:11:23.647 00:11:23.647 00:11:23.647 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.647 http://cunit.sourceforge.net/ 00:11:23.647 00:11:23.647 00:11:23.647 Suite: crc32_ieee 00:11:23.647 Test: test_crc32_ieee ...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 1 1 1 0 0 00:11:23.647 asserts 1 1 1 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.000 seconds 00:11:23.647 11:55:29 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:11:23.647 00:11:23.647 00:11:23.647 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.647 http://cunit.sourceforge.net/ 00:11:23.647 00:11:23.647 00:11:23.647 Suite: crc32c 00:11:23.647 Test: test_crc32c ...passed 00:11:23.647 Test: test_crc32c_nvme ...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 2 2 2 0 0 00:11:23.647 asserts 16 16 16 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.000 seconds 00:11:23.647 11:55:29 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:11:23.647 00:11:23.647 00:11:23.647 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.647 http://cunit.sourceforge.net/ 00:11:23.647 00:11:23.647 00:11:23.647 Suite: crc64 00:11:23.647 Test: test_crc64_nvme 
...passed 00:11:23.647 00:11:23.647 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.647 suites 1 1 n/a 0 0 00:11:23.647 tests 1 1 1 0 0 00:11:23.647 asserts 4 4 4 0 n/a 00:11:23.647 00:11:23.647 Elapsed time = 0.001 seconds 00:11:23.909 11:55:29 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:11:23.909 00:11:23.909 00:11:23.909 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.909 http://cunit.sourceforge.net/ 00:11:23.909 00:11:23.909 00:11:23.909 Suite: string 00:11:23.909 Test: test_parse_ip_addr ...passed 00:11:23.909 Test: test_str_chomp ...passed 00:11:23.909 Test: test_parse_capacity ...passed 00:11:23.909 Test: test_sprintf_append_realloc ...passed 00:11:23.909 Test: test_strtol ...passed 00:11:23.909 Test: test_strtoll ...passed 00:11:23.909 Test: test_strarray ...passed 00:11:23.909 Test: test_strcpy_replace ...passed 00:11:23.909 00:11:23.909 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.909 suites 1 1 n/a 0 0 00:11:23.909 tests 8 8 8 0 0 00:11:23.909 asserts 161 161 161 0 n/a 00:11:23.909 00:11:23.909 Elapsed time = 0.001 seconds 00:11:23.909 11:55:29 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:11:23.909 00:11:23.909 00:11:23.909 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.909 http://cunit.sourceforge.net/ 00:11:23.909 00:11:23.909 00:11:23.909 Suite: dif 00:11:23.909 Test: dif_generate_and_verify_test ...[2024-11-29 11:55:29.201790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:11:23.909 [2024-11-29 11:55:29.202401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:11:23.909 [2024-11-29 11:55:29.202709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:11:23.909 [2024-11-29 11:55:29.203005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:11:23.909 [2024-11-29 11:55:29.203296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:11:23.909 [2024-11-29 11:55:29.203594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:11:23.909 passed 00:11:23.909 Test: dif_disable_check_test ...[2024-11-29 11:55:29.204621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:11:23.909 [2024-11-29 11:55:29.204981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:11:23.909 [2024-11-29 11:55:29.205275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:11:23.909 passed 00:11:23.909 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-29 11:55:29.206442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:11:23.909 [2024-11-29 11:55:29.206768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:11:23.909 [2024-11-29 
11:55:29.207094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:11:23.909 [2024-11-29 11:55:29.207455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:11:23.909 [2024-11-29 11:55:29.207789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:11:23.909 [2024-11-29 11:55:29.208109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:11:23.909 [2024-11-29 11:55:29.208424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:11:23.909 [2024-11-29 11:55:29.208735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:11:23.909 [2024-11-29 11:55:29.209044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:11:23.909 [2024-11-29 11:55:29.209374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:11:23.909 [2024-11-29 11:55:29.209714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:11:23.909 passed 00:11:23.909 Test: dif_apptag_mask_test ...[2024-11-29 11:55:29.210054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:11:23.909 [2024-11-29 11:55:29.210371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:11:23.909 passed 00:11:23.909 Test: dif_sec_512_md_0_error_test ...[2024-11-29 11:55:29.210582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:11:23.909 passed 00:11:23.909 Test: dif_sec_4096_md_0_error_test ...[2024-11-29 11:55:29.210637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:11:23.909 [2024-11-29 11:55:29.210684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
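
Note on the two "Metadata size is smaller than DIF size." failures above: those are expected, negative-path checks (dif_sec_512_md_0_error_test and dif_sec_4096_md_0_error_test hand the context-init path a metadata area too small for protection information). As a hedged illustration only, not SPDK's real spdk_dif_ctx_init() signature, the check being exercised boils down to this kind of bounds test, since a T10 DIF tuple needs 8 bytes of metadata (2-byte guard CRC, 2-byte application tag, 4-byte reference tag):

/* Minimal sketch, assuming nothing about SPDK internals beyond the
 * 8-byte DIF tuple; the helper name md_size_is_valid() is made up. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define DIF_SIZE 8 /* guard(2) + app tag(2) + ref tag(4) */

static bool md_size_is_valid(size_t md_size)
{
        if (md_size < DIF_SIZE) {
                fprintf(stderr, "Metadata size is smaller than DIF size.\n");
                return false;
        }
        return true;
}

int main(void)
{
        /* 0-byte metadata, as in dif_sec_512_md_0_error_test, must fail. */
        return md_size_is_valid(0) ? 1 : 0;
}
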
00:11:23.909 passed 00:11:23.909 Test: dif_sec_4100_md_128_error_test ...passed 00:11:23.909 Test: dif_guard_seed_test ...passed 00:11:23.909 Test: dif_guard_value_test ...[2024-11-29 11:55:29.210744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:11:23.909 [2024-11-29 11:55:29.210786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:11:23.909 passed 00:11:23.909 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:11:23.909 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:11:23.909 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-29 11:55:29.255184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.909 [2024-11-29 11:55:29.257662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:11:23.909 [2024-11-29 11:55:29.260123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.909 [2024-11-29 11:55:29.262620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.909 [2024-11-29 11:55:29.265104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.909 [2024-11-29 11:55:29.267563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.909 [2024-11-29 11:55:29.270016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.909 [2024-11-29 11:55:29.271192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=144c 00:11:23.909 [2024-11-29 11:55:29.272359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.909 [2024-11-29 11:55:29.274830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:11:23.909 [2024-11-29 11:55:29.277307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.909 [2024-11-29 11:55:29.279758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.909 [2024-11-29 11:55:29.282213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.909 [2024-11-29 11:55:29.284676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.910 [2024-11-29 11:55:29.287152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.910 [2024-11-29 11:55:29.288315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2a8cf4f1 00:11:23.910 [2024-11-29 11:55:29.289507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.910 [2024-11-29 11:55:29.291962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:11:23.910 [2024-11-29 11:55:29.294417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.296872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.299338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.301798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.304295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:23.910 [2024-11-29 11:55:29.305471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=c9f1a71ad065c7bd 00:11:23.910 passed 00:11:23.910 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-29 11:55:29.305751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.910 [2024-11-29 11:55:29.306060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:11:23.910 [2024-11-29 11:55:29.306379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.306683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.307021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.307334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.307659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.910 [2024-11-29 11:55:29.307885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=144c 00:11:23.910 [2024-11-29 11:55:29.308120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.910 [2024-11-29 11:55:29.308421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:11:23.910 [2024-11-29 11:55:29.308739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.309048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.309362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.910 [2024-11-29 11:55:29.309660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.910 [2024-11-29 11:55:29.309974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.910 [2024-11-29 11:55:29.310190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2a8cf4f1 00:11:23.910 [2024-11-29 11:55:29.310445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.910 [2024-11-29 11:55:29.310749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:11:23.910 [2024-11-29 11:55:29.311049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.311346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.311656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.311964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.312282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:23.910 [2024-11-29 11:55:29.312519] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=c9f1a71ad065c7bd 00:11:23.910 passed 00:11:23.910 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-29 11:55:29.312794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.910 [2024-11-29 11:55:29.313107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:11:23.910 [2024-11-29 11:55:29.313406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.313709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.314049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.314372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.314682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.910 [2024-11-29 11:55:29.314910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=144c 00:11:23.910 [2024-11-29 11:55:29.315116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.910 [2024-11-29 11:55:29.315421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:11:23.910 [2024-11-29 11:55:29.315726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.316026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.316330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.910 [2024-11-29 11:55:29.316626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.910 [2024-11-29 11:55:29.316939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.910 [2024-11-29 11:55:29.317165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2a8cf4f1 00:11:23.910 [2024-11-29 11:55:29.317410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.910 [2024-11-29 11:55:29.317713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:11:23.910 [2024-11-29 11:55:29.318036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.318358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.318678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.318974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.319297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:23.910 [2024-11-29 11:55:29.319519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=c9f1a71ad065c7bd 00:11:23.910 passed 00:11:23.910 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-29 11:55:29.319790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.910 [2024-11-29 11:55:29.320121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:11:23.910 [2024-11-29 11:55:29.320430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.320734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.910 [2024-11-29 11:55:29.321064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.321368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.910 [2024-11-29 11:55:29.321676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.910 [2024-11-29 11:55:29.321912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=144c 00:11:23.910 [2024-11-29 11:55:29.322142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.910 [2024-11-29 11:55:29.322450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:11:23.911 [2024-11-29 11:55:29.322778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.323085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.323383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.911 [2024-11-29 11:55:29.323698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=200000000000058 00:11:23.911 [2024-11-29 11:55:29.324009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.911 [2024-11-29 11:55:29.324239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2a8cf4f1 00:11:23.911 [2024-11-29 11:55:29.324482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.911 [2024-11-29 11:55:29.324791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:11:23.911 [2024-11-29 11:55:29.325080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.325389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.325697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.326020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.326340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:23.911 [2024-11-29 11:55:29.326583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=c9f1a71ad065c7bd 00:11:23.911 passed 00:11:23.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-29 11:55:29.326860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.911 [2024-11-29 11:55:29.327159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:11:23.911 [2024-11-29 11:55:29.327457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.327775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.328102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.328407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.328711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.911 [2024-11-29 11:55:29.328929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=144c 00:11:23.911 passed 00:11:23.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-29 11:55:29.329209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.911 [2024-11-29 11:55:29.329514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:11:23.911 [2024-11-29 11:55:29.329850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.330172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.330490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.911 [2024-11-29 11:55:29.330800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.911 [2024-11-29 11:55:29.331103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.911 [2024-11-29 11:55:29.331316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2a8cf4f1 00:11:23.911 [2024-11-29 11:55:29.331590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.911 [2024-11-29 11:55:29.331895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:11:23.911 [2024-11-29 11:55:29.332202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.332520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.332823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.333128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.333447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:23.911 [2024-11-29 11:55:29.333673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=c9f1a71ad065c7bd 00:11:23.911 passed 00:11:23.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-29 11:55:29.333963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.911 [2024-11-29 11:55:29.334275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:11:23.911 [2024-11-29 11:55:29.334590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.334905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed 
to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.335242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.335541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.335847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.911 [2024-11-29 11:55:29.336067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=144c 00:11:23.911 passed 00:11:23.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-29 11:55:29.336331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.911 [2024-11-29 11:55:29.336631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:11:23.911 [2024-11-29 11:55:29.336959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.337270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.337583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.911 [2024-11-29 11:55:29.337894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.911 [2024-11-29 11:55:29.338202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.911 [2024-11-29 11:55:29.338442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2a8cf4f1 00:11:23.911 [2024-11-29 11:55:29.338738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.911 [2024-11-29 11:55:29.339052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:11:23.911 [2024-11-29 11:55:29.339361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.339650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.339959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.340265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.340588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, 
Actual=756f40d640049ac1 00:11:23.911 [2024-11-29 11:55:29.340823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=c9f1a71ad065c7bd 00:11:23.911 passed 00:11:23.911 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:11:23.911 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:11:23.911 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:11:23.911 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:11:23.911 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:11:23.911 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:11:23.911 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:11:23.911 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:11:23.911 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:11:23.911 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-29 11:55:29.385362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.911 [2024-11-29 11:55:29.386546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:11:23.911 [2024-11-29 11:55:29.387659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.388758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.911 [2024-11-29 11:55:29.389883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.391002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.911 [2024-11-29 11:55:29.392117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.911 [2024-11-29 11:55:29.393221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c77b 00:11:23.912 [2024-11-29 11:55:29.394337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.912 [2024-11-29 11:55:29.395459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:11:23.912 [2024-11-29 11:55:29.396587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.397717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.398850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.912 [2024-11-29 11:55:29.399969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.912 [2024-11-29 11:55:29.401080] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.912 [2024-11-29 11:55:29.402209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=f87df23d 00:11:23.912 [2024-11-29 11:55:29.403346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.912 [2024-11-29 11:55:29.404494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:11:23.912 [2024-11-29 11:55:29.405608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.406749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.407851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.912 [2024-11-29 11:55:29.408959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.912 [2024-11-29 11:55:29.410075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:23.912 [2024-11-29 11:55:29.411234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7996872cce885579 00:11:23.912 passed 00:11:23.912 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-29 11:55:29.411588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:23.912 [2024-11-29 11:55:29.411867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:11:23.912 [2024-11-29 11:55:29.412149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.412413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.412718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.912 [2024-11-29 11:55:29.413025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:23.912 [2024-11-29 11:55:29.413303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:23.912 [2024-11-29 11:55:29.413583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c77b 00:11:23.912 [2024-11-29 11:55:29.413870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:23.912 [2024-11-29 11:55:29.414154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:11:23.912 [2024-11-29 11:55:29.414452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.414736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:23.912 [2024-11-29 11:55:29.415011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.912 [2024-11-29 11:55:29.415285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:23.912 [2024-11-29 11:55:29.415556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:23.912 [2024-11-29 11:55:29.415826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=f87df23d 00:11:23.912 [2024-11-29 11:55:29.416122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:23.912 [2024-11-29 11:55:29.416392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:11:24.172 [2024-11-29 11:55:29.416670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.172 [2024-11-29 11:55:29.416950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.172 [2024-11-29 11:55:29.417231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.172 [2024-11-29 11:55:29.417500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.172 [2024-11-29 11:55:29.417819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:24.172 [2024-11-29 11:55:29.418108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7996872cce885579 00:11:24.172 passed 00:11:24.172 Test: dix_sec_512_md_0_error ...[2024-11-29 11:55:29.418211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
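
The long runs of *ERROR* lines in this suite are also intentional: each injection test corrupts the guard, application-tag, or reference-tag field and asserts that verification reports the mismatch. The sketch below is a simplified, hypothetical stand-in for that compare-and-report pattern; it is not the real _dif_verify() from lib/util/dif.c, and struct pi_tuple and verify_tuple() are invented names used only to mirror the "Failed to compare ...: LBA=..., Expected=..., Actual=..." shape seen above:

/* Illustrative sketch only; field widths chosen to cover the 16/32/64-bit
 * guard values that appear in the log. */
#include <inttypes.h>
#include <stdio.h>

struct pi_tuple {
        uint64_t guard;   /* CRC computed over the data block */
        uint16_t app_tag;
        uint64_t ref_tag; /* typically derived from the LBA */
};

static int verify_tuple(uint64_t lba, const struct pi_tuple *expected,
                        const struct pi_tuple *actual)
{
        if (expected->guard != actual->guard) {
                fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
                        ", Expected=%" PRIx64 ", Actual=%" PRIx64 "\n",
                        lba, expected->guard, actual->guard);
                return -1;
        }
        if (expected->app_tag != actual->app_tag) {
                fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64
                        ", Expected=%x, Actual=%x\n", lba,
                        (unsigned)expected->app_tag, (unsigned)actual->app_tag);
                return -1;
        }
        if (expected->ref_tag != actual->ref_tag) {
                fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64
                        ", Expected=%" PRIx64 ", Actual=%" PRIx64 "\n",
                        lba, expected->ref_tag, actual->ref_tag);
                return -1;
        }
        return 0;
}

int main(void)
{
        /* Reproduces the guard-mismatch style of the entries above. */
        struct pi_tuple expected = { .guard = 0xfd4c, .app_tag = 0x88, .ref_tag = 0x58 };
        struct pi_tuple actual   = { .guard = 0xd66a, .app_tag = 0x88, .ref_tag = 0x58 };

        return verify_tuple(88, &expected, &actual) == 0 ? 0 : 1;
}
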
00:11:24.172 passed 00:11:24.172 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:11:24.172 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:11:24.172 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:11:24.172 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:11:24.172 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:11:24.172 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:11:24.172 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:11:24.172 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:11:24.172 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:11:24.172 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-29 11:55:29.461999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:24.172 [2024-11-29 11:55:29.463160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:11:24.172 [2024-11-29 11:55:29.464276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.172 [2024-11-29 11:55:29.465384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.172 [2024-11-29 11:55:29.466609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.172 [2024-11-29 11:55:29.467740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.468839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:24.173 [2024-11-29 11:55:29.469969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c77b 00:11:24.173 [2024-11-29 11:55:29.471105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:24.173 [2024-11-29 11:55:29.472239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:11:24.173 [2024-11-29 11:55:29.473384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.474515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.475633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:24.173 [2024-11-29 11:55:29.476740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:24.173 [2024-11-29 11:55:29.477863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:24.173 [2024-11-29 11:55:29.478974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=eaa640ac, Actual=f87df23d 00:11:24.173 [2024-11-29 11:55:29.480115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:24.173 [2024-11-29 11:55:29.481216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:11:24.173 [2024-11-29 11:55:29.482361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.483468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.484575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.485661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.486815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:24.173 [2024-11-29 11:55:29.487919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7996872cce885579 00:11:24.173 passed 00:11:24.173 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-29 11:55:29.488278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:11:24.173 [2024-11-29 11:55:29.488560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:11:24.173 [2024-11-29 11:55:29.488837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.489121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.489418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.489699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.489994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d66a 00:11:24.173 [2024-11-29 11:55:29.490268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c77b 00:11:24.173 [2024-11-29 11:55:29.490557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:11:24.173 [2024-11-29 11:55:29.490828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:11:24.173 [2024-11-29 11:55:29.491115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 
11:55:29.491383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.491653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:24.173 [2024-11-29 11:55:29.491920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:11:24.173 [2024-11-29 11:55:29.492189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2238898d 00:11:24.173 [2024-11-29 11:55:29.492465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=f87df23d 00:11:24.173 [2024-11-29 11:55:29.492744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:11:24.173 [2024-11-29 11:55:29.493020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:11:24.173 [2024-11-29 11:55:29.493282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.493552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:11:24.173 [2024-11-29 11:55:29.493837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.494115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:11:24.173 [2024-11-29 11:55:29.494402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=756f40d640049ac1 00:11:24.173 [2024-11-29 11:55:29.494677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7996872cce885579 00:11:24.173 passed 00:11:24.173 Test: set_md_interleave_iovs_test ...passed 00:11:24.173 Test: set_md_interleave_iovs_split_test ...passed 00:11:24.173 Test: dif_generate_stream_pi_16_test ...passed 00:11:24.173 Test: dif_generate_stream_test ...passed 00:11:24.173 Test: set_md_interleave_iovs_alignment_test ...[2024-11-29 11:55:29.502190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
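
The single "Buffer overflow will occur" entry from spdk_dif_set_md_interleave_iovs just above is another deliberate negative test: the caller's iovec array is too small to hold the data blocks plus their interleaved metadata. As a hedged illustration (the helpers below are made up and do not match the real spdk_dif_set_md_interleave_iovs() API), the guard reduces to a capacity check over the iovecs:

/* Sketch only, under the stated assumptions; iovs_capacity() and
 * iovs_can_hold() are hypothetical helpers. */
#include <stdbool.h>
#include <stddef.h>
#include <sys/uio.h>

static size_t iovs_capacity(const struct iovec *iovs, int iovcnt)
{
        size_t total = 0;

        for (int i = 0; i < iovcnt; i++) {
                total += iovs[i].iov_len;
        }
        return total;
}

/* True when the iovecs can hold num_blocks blocks of data_block_size
 * bytes each plus md_size bytes of interleaved metadata per block. */
static bool iovs_can_hold(const struct iovec *iovs, int iovcnt,
                          size_t num_blocks, size_t data_block_size,
                          size_t md_size)
{
        return iovs_capacity(iovs, iovcnt) >=
               num_blocks * (data_block_size + md_size);
}

int main(void)
{
        char buf[512];
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };

        /* A 512-byte iovec cannot hold one 512B block plus 8B metadata. */
        return iovs_can_hold(&iov, 1, 1, 512, 8) ? 1 : 0;
}
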
00:11:24.173 passed 00:11:24.173 Test: dif_generate_split_test ...passed 00:11:24.173 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:11:24.173 Test: dif_verify_split_test ...passed 00:11:24.173 Test: dif_verify_stream_multi_segments_test ...passed 00:11:24.173 Test: update_crc32c_pi_16_test ...passed 00:11:24.173 Test: update_crc32c_test ...passed 00:11:24.173 Test: dif_update_crc32c_split_test ...passed 00:11:24.173 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:11:24.173 Test: get_range_with_md_test ...passed 00:11:24.173 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:11:24.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:11:24.173 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:11:24.173 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:11:24.173 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:11:24.173 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:11:24.173 Test: dif_generate_and_verify_unmap_test ...passed 00:11:24.173 00:11:24.173 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.173 suites 1 1 n/a 0 0 00:11:24.173 tests 79 79 79 0 0 00:11:24.173 asserts 3584 3584 3584 0 n/a 00:11:24.173 00:11:24.173 Elapsed time = 0.348 seconds 00:11:24.173 11:55:29 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:11:24.173 00:11:24.173 00:11:24.173 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.173 http://cunit.sourceforge.net/ 00:11:24.173 00:11:24.173 00:11:24.173 Suite: iov 00:11:24.173 Test: test_single_iov ...passed 00:11:24.173 Test: test_simple_iov ...passed 00:11:24.173 Test: test_complex_iov ...passed 00:11:24.173 Test: test_iovs_to_buf ...passed 00:11:24.173 Test: test_buf_to_iovs ...passed 00:11:24.173 Test: test_memset ...passed 00:11:24.173 Test: test_iov_one ...passed 00:11:24.173 Test: test_iov_xfer ...passed 00:11:24.173 00:11:24.173 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.173 suites 1 1 n/a 0 0 00:11:24.173 tests 8 8 8 0 0 00:11:24.173 asserts 156 156 156 0 n/a 00:11:24.173 00:11:24.173 Elapsed time = 0.000 seconds 00:11:24.173 11:55:29 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:11:24.173 00:11:24.173 00:11:24.173 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.173 http://cunit.sourceforge.net/ 00:11:24.173 00:11:24.173 00:11:24.173 Suite: math 00:11:24.173 Test: test_serial_number_arithmetic ...passed 00:11:24.173 Suite: erase 00:11:24.173 Test: test_memset_s ...passed 00:11:24.173 00:11:24.173 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.173 suites 2 2 n/a 0 0 00:11:24.173 tests 2 2 2 0 0 00:11:24.173 asserts 18 18 18 0 n/a 00:11:24.173 00:11:24.173 Elapsed time = 0.000 seconds 00:11:24.173 11:55:29 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:11:24.173 00:11:24.173 00:11:24.173 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.173 http://cunit.sourceforge.net/ 00:11:24.173 00:11:24.173 00:11:24.173 Suite: pipe 00:11:24.173 Test: test_create_destroy ...passed 00:11:24.173 Test: test_write_get_buffer ...passed 00:11:24.173 Test: test_write_advance ...passed 00:11:24.173 Test: test_read_get_buffer ...passed 00:11:24.173 Test: test_read_advance ...passed 00:11:24.173 Test: test_data ...passed 00:11:24.173 00:11:24.173 Run Summary: Type Total Ran 
Passed Failed Inactive 00:11:24.173 suites 1 1 n/a 0 0 00:11:24.173 tests 6 6 6 0 0 00:11:24.173 asserts 250 250 250 0 n/a 00:11:24.173 00:11:24.173 Elapsed time = 0.000 seconds 00:11:24.174 11:55:29 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:11:24.174 00:11:24.174 00:11:24.174 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.174 http://cunit.sourceforge.net/ 00:11:24.174 00:11:24.174 00:11:24.174 Suite: xor 00:11:24.174 Test: test_xor_gen ...passed 00:11:24.174 00:11:24.174 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.174 suites 1 1 n/a 0 0 00:11:24.174 tests 1 1 1 0 0 00:11:24.174 asserts 17 17 17 0 n/a 00:11:24.174 00:11:24.174 Elapsed time = 0.007 seconds 00:11:24.434 00:11:24.434 real 0m0.724s 00:11:24.434 user 0m0.557s 00:11:24.434 sys 0m0.173s 00:11:24.434 11:55:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.434 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.434 ************************************ 00:11:24.434 END TEST unittest_util 00:11:24.434 ************************************ 00:11:24.434 11:55:29 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:11:24.434 11:55:29 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:11:24.434 11:55:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.434 11:55:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.434 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.434 ************************************ 00:11:24.434 START TEST unittest_vhost 00:11:24.434 ************************************ 00:11:24.434 11:55:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:11:24.434 00:11:24.434 00:11:24.434 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.434 http://cunit.sourceforge.net/ 00:11:24.434 00:11:24.434 00:11:24.434 Suite: vhost_suite 00:11:24.434 Test: desc_to_iov_test ...[2024-11-29 11:55:29.758037] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:11:24.434 passed 00:11:24.434 Test: create_controller_test ...[2024-11-29 11:55:29.763221] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:11:24.434 [2024-11-29 11:55:29.763359] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:11:24.434 [2024-11-29 11:55:29.763494] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:11:24.434 [2024-11-29 11:55:29.763612] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:11:24.434 [2024-11-29 11:55:29.763692] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:11:24.434 [2024-11-29 11:55:29.763841] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-11-29 11:55:29.764999] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:11:24.434 passed 00:11:24.434 Test: session_find_by_vid_test ...passed 00:11:24.434 Test: remove_controller_test ...[2024-11-29 11:55:29.767479] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:11:24.434 passed 00:11:24.434 Test: vq_avail_ring_get_test ...passed 00:11:24.434 Test: vq_packed_ring_test ...passed 00:11:24.434 Test: vhost_blk_construct_test ...passed 00:11:24.434 00:11:24.434 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.434 suites 1 1 n/a 0 0 00:11:24.434 tests 7 7 7 0 0 00:11:24.434 asserts 145 145 145 0 n/a 00:11:24.434 00:11:24.434 Elapsed time = 0.014 seconds 00:11:24.434 00:11:24.434 real 0m0.048s 00:11:24.434 user 0m0.033s 00:11:24.434 sys 0m0.015s 00:11:24.434 11:55:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.434 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.434 ************************************ 00:11:24.434 END TEST unittest_vhost 00:11:24.434 ************************************ 00:11:24.435 11:55:29 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:11:24.435 11:55:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.435 11:55:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.435 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 ************************************ 00:11:24.435 START TEST unittest_dma 00:11:24.435 ************************************ 00:11:24.435 11:55:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:11:24.435 00:11:24.435 00:11:24.435 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.435 http://cunit.sourceforge.net/ 00:11:24.435 00:11:24.435 00:11:24.435 Suite: dma_suite 00:11:24.435 Test: test_dma ...[2024-11-29 11:55:29.850438] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:11:24.435 passed 00:11:24.435 00:11:24.435 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.435 suites 1 1 n/a 0 0 00:11:24.435 tests 1 1 1 0 0 00:11:24.435 asserts 50 50 50 0 n/a 00:11:24.435 00:11:24.435 Elapsed time = 0.001 seconds 00:11:24.435 00:11:24.435 real 0m0.026s 00:11:24.435 user 0m0.017s 00:11:24.435 sys 0m0.009s 00:11:24.435 11:55:29 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.435 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 ************************************ 00:11:24.435 END TEST unittest_dma 00:11:24.435 ************************************ 00:11:24.435 11:55:29 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:11:24.435 11:55:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.435 11:55:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.435 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 ************************************ 00:11:24.435 START TEST unittest_init 00:11:24.435 ************************************ 00:11:24.435 11:55:29 -- common/autotest_common.sh@1114 -- # unittest_init 00:11:24.435 11:55:29 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:11:24.435 00:11:24.435 00:11:24.435 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.435 http://cunit.sourceforge.net/ 00:11:24.435 00:11:24.435 00:11:24.435 Suite: subsystem_suite 00:11:24.435 Test: subsystem_sort_test_depends_on_single ...passed 00:11:24.435 Test: subsystem_sort_test_depends_on_multiple ...passed 00:11:24.435 Test: subsystem_sort_test_missing_dependency ...[2024-11-29 11:55:29.933238] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:11:24.435 [2024-11-29 11:55:29.934277] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:11:24.435 passed 00:11:24.435 00:11:24.435 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.435 suites 1 1 n/a 0 0 00:11:24.435 tests 3 3 3 0 0 00:11:24.435 asserts 20 20 20 0 n/a 00:11:24.435 00:11:24.435 Elapsed time = 0.001 seconds 00:11:24.693 00:11:24.693 real 0m0.035s 00:11:24.693 user 0m0.013s 00:11:24.693 sys 0m0.023s 00:11:24.693 11:55:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.693 ************************************ 00:11:24.693 END TEST unittest_init 00:11:24.693 ************************************ 00:11:24.693 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.693 11:55:29 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:11:24.693 11:55:29 -- unit/unittest.sh@266 -- # hostname 00:11:24.693 11:55:29 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:11:24.693 geninfo: WARNING: invalid characters removed from testname! 
00:11:56.758 11:55:57 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:11:57.016 11:56:02 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:12:00.306 11:56:05 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:12:02.917 11:56:08 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:12:06.201 11:56:11 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:12:08.733 11:56:14 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:12:12.018 11:56:16 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:12:12.018 11:56:16 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:12:12.337 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:12:12.337 Found 309 entries. 00:12:12.337 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:12:12.337 Writing .css and .png files. 00:12:12.337 Generating output. 
00:12:12.337 Processing file include/linux/virtio_ring.h 00:12:12.595 Processing file include/spdk/nvme.h 00:12:12.595 Processing file include/spdk/histogram_data.h 00:12:12.595 Processing file include/spdk/trace.h 00:12:12.595 Processing file include/spdk/thread.h 00:12:12.595 Processing file include/spdk/base64.h 00:12:12.595 Processing file include/spdk/bdev_module.h 00:12:12.595 Processing file include/spdk/nvmf_transport.h 00:12:12.595 Processing file include/spdk/endian.h 00:12:12.595 Processing file include/spdk/mmio.h 00:12:12.595 Processing file include/spdk/nvme_spec.h 00:12:12.595 Processing file include/spdk/util.h 00:12:12.853 Processing file include/spdk_internal/rdma.h 00:12:12.853 Processing file include/spdk_internal/sock.h 00:12:12.853 Processing file include/spdk_internal/nvme_tcp.h 00:12:12.853 Processing file include/spdk_internal/utf.h 00:12:12.853 Processing file include/spdk_internal/sgl.h 00:12:12.853 Processing file include/spdk_internal/virtio.h 00:12:12.853 Processing file lib/accel/accel.c 00:12:12.853 Processing file lib/accel/accel_rpc.c 00:12:12.853 Processing file lib/accel/accel_sw.c 00:12:13.418 Processing file lib/bdev/bdev_rpc.c 00:12:13.418 Processing file lib/bdev/part.c 00:12:13.418 Processing file lib/bdev/bdev_zone.c 00:12:13.418 Processing file lib/bdev/scsi_nvme.c 00:12:13.418 Processing file lib/bdev/bdev.c 00:12:13.418 Processing file lib/blob/blob_bs_dev.c 00:12:13.418 Processing file lib/blob/request.c 00:12:13.418 Processing file lib/blob/zeroes.c 00:12:13.418 Processing file lib/blob/blobstore.c 00:12:13.418 Processing file lib/blob/blobstore.h 00:12:13.676 Processing file lib/blobfs/tree.c 00:12:13.676 Processing file lib/blobfs/blobfs.c 00:12:13.676 Processing file lib/conf/conf.c 00:12:13.676 Processing file lib/dma/dma.c 00:12:13.935 Processing file lib/env_dpdk/sigbus_handler.c 00:12:13.935 Processing file lib/env_dpdk/pci_dpdk.c 00:12:13.935 Processing file lib/env_dpdk/env.c 00:12:13.935 Processing file lib/env_dpdk/threads.c 00:12:13.935 Processing file lib/env_dpdk/pci_idxd.c 00:12:13.935 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:12:13.935 Processing file lib/env_dpdk/pci_ioat.c 00:12:13.935 Processing file lib/env_dpdk/pci.c 00:12:13.935 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:12:13.935 Processing file lib/env_dpdk/init.c 00:12:13.935 Processing file lib/env_dpdk/memory.c 00:12:13.935 Processing file lib/env_dpdk/pci_event.c 00:12:13.935 Processing file lib/env_dpdk/pci_vmd.c 00:12:13.935 Processing file lib/env_dpdk/pci_virtio.c 00:12:14.194 Processing file lib/event/app.c 00:12:14.194 Processing file lib/event/scheduler_static.c 00:12:14.194 Processing file lib/event/log_rpc.c 00:12:14.194 Processing file lib/event/app_rpc.c 00:12:14.194 Processing file lib/event/reactor.c 00:12:14.761 Processing file lib/ftl/ftl_trace.c 00:12:14.761 Processing file lib/ftl/ftl_writer.c 00:12:14.761 Processing file lib/ftl/ftl_p2l.c 00:12:14.761 Processing file lib/ftl/ftl_core.c 00:12:14.761 Processing file lib/ftl/ftl_layout.c 00:12:14.761 Processing file lib/ftl/ftl_io.c 00:12:14.761 Processing file lib/ftl/ftl_init.c 00:12:14.761 Processing file lib/ftl/ftl_nv_cache_io.h 00:12:14.761 Processing file lib/ftl/ftl_l2p_cache.c 00:12:14.761 Processing file lib/ftl/ftl_l2p_flat.c 00:12:14.761 Processing file lib/ftl/ftl_debug.h 00:12:14.761 Processing file lib/ftl/ftl_writer.h 00:12:14.761 Processing file lib/ftl/ftl_core.h 00:12:14.761 Processing file lib/ftl/ftl_band_ops.c 00:12:14.761 Processing file lib/ftl/ftl_io.h 00:12:14.761 
Processing file lib/ftl/ftl_band.c 00:12:14.761 Processing file lib/ftl/ftl_sb.c 00:12:14.761 Processing file lib/ftl/ftl_reloc.c 00:12:14.761 Processing file lib/ftl/ftl_rq.c 00:12:14.761 Processing file lib/ftl/ftl_nv_cache.c 00:12:14.761 Processing file lib/ftl/ftl_nv_cache.h 00:12:14.761 Processing file lib/ftl/ftl_l2p.c 00:12:14.761 Processing file lib/ftl/ftl_debug.c 00:12:14.761 Processing file lib/ftl/ftl_band.h 00:12:14.761 Processing file lib/ftl/base/ftl_base_bdev.c 00:12:14.761 Processing file lib/ftl/base/ftl_base_dev.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:12:15.019 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:12:15.019 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:12:15.019 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:12:15.277 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:12:15.277 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:12:15.277 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:12:15.277 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:12:15.533 Processing file lib/ftl/utils/ftl_property.c 00:12:15.533 Processing file lib/ftl/utils/ftl_mempool.c 00:12:15.533 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:12:15.533 Processing file lib/ftl/utils/ftl_md.c 00:12:15.533 Processing file lib/ftl/utils/ftl_df.h 00:12:15.533 Processing file lib/ftl/utils/ftl_conf.c 00:12:15.533 Processing file lib/ftl/utils/ftl_bitmap.c 00:12:15.533 Processing file lib/ftl/utils/ftl_property.h 00:12:15.533 Processing file lib/ftl/utils/ftl_addr_utils.h 00:12:15.533 Processing file lib/idxd/idxd_user.c 00:12:15.533 Processing file lib/idxd/idxd.c 00:12:15.533 Processing file lib/idxd/idxd_internal.h 00:12:15.791 Processing file lib/init/rpc.c 00:12:15.791 Processing file lib/init/subsystem.c 00:12:15.791 Processing file lib/init/subsystem_rpc.c 00:12:15.791 Processing file lib/init/json_config.c 00:12:15.791 Processing file lib/ioat/ioat.c 00:12:15.791 Processing file lib/ioat/ioat_internal.h 00:12:16.049 Processing file lib/iscsi/init_grp.c 00:12:16.049 Processing file lib/iscsi/portal_grp.c 00:12:16.049 Processing file lib/iscsi/md5.c 00:12:16.049 Processing file lib/iscsi/tgt_node.c 00:12:16.049 Processing file lib/iscsi/iscsi_subsystem.c 00:12:16.049 Processing file lib/iscsi/param.c 00:12:16.049 Processing file lib/iscsi/iscsi.c 00:12:16.049 Processing file lib/iscsi/conn.c 00:12:16.049 Processing file lib/iscsi/iscsi.h 00:12:16.049 Processing file lib/iscsi/iscsi_rpc.c 00:12:16.049 Processing file lib/iscsi/task.c 00:12:16.049 Processing file lib/iscsi/task.h 00:12:16.306 Processing file lib/json/json_util.c 00:12:16.306 Processing file lib/json/json_write.c 00:12:16.306 Processing file lib/json/json_parse.c 00:12:16.306 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:12:16.306 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:12:16.306 Processing file lib/jsonrpc/jsonrpc_server.c 00:12:16.306 
Processing file lib/jsonrpc/jsonrpc_client.c 00:12:16.564 Processing file lib/log/log_deprecated.c 00:12:16.564 Processing file lib/log/log.c 00:12:16.564 Processing file lib/log/log_flags.c 00:12:16.564 Processing file lib/lvol/lvol.c 00:12:16.564 Processing file lib/nbd/nbd.c 00:12:16.564 Processing file lib/nbd/nbd_rpc.c 00:12:16.821 Processing file lib/notify/notify.c 00:12:16.822 Processing file lib/notify/notify_rpc.c 00:12:17.388 Processing file lib/nvme/nvme_ns_cmd.c 00:12:17.388 Processing file lib/nvme/nvme_ns.c 00:12:17.388 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:12:17.388 Processing file lib/nvme/nvme_internal.h 00:12:17.388 Processing file lib/nvme/nvme_tcp.c 00:12:17.388 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:12:17.388 Processing file lib/nvme/nvme_qpair.c 00:12:17.388 Processing file lib/nvme/nvme_ctrlr.c 00:12:17.388 Processing file lib/nvme/nvme_transport.c 00:12:17.388 Processing file lib/nvme/nvme_quirks.c 00:12:17.388 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:12:17.388 Processing file lib/nvme/nvme_pcie_internal.h 00:12:17.388 Processing file lib/nvme/nvme_cuse.c 00:12:17.388 Processing file lib/nvme/nvme_zns.c 00:12:17.388 Processing file lib/nvme/nvme_rdma.c 00:12:17.388 Processing file lib/nvme/nvme_pcie.c 00:12:17.388 Processing file lib/nvme/nvme.c 00:12:17.388 Processing file lib/nvme/nvme_discovery.c 00:12:17.388 Processing file lib/nvme/nvme_opal.c 00:12:17.388 Processing file lib/nvme/nvme_poll_group.c 00:12:17.388 Processing file lib/nvme/nvme_fabric.c 00:12:17.388 Processing file lib/nvme/nvme_pcie_common.c 00:12:17.388 Processing file lib/nvme/nvme_vfio_user.c 00:12:17.388 Processing file lib/nvme/nvme_io_msg.c 00:12:17.953 Processing file lib/nvmf/rdma.c 00:12:17.953 Processing file lib/nvmf/tcp.c 00:12:17.953 Processing file lib/nvmf/ctrlr.c 00:12:17.953 Processing file lib/nvmf/subsystem.c 00:12:17.953 Processing file lib/nvmf/ctrlr_bdev.c 00:12:17.953 Processing file lib/nvmf/nvmf_internal.h 00:12:17.953 Processing file lib/nvmf/transport.c 00:12:17.953 Processing file lib/nvmf/nvmf_rpc.c 00:12:17.953 Processing file lib/nvmf/ctrlr_discovery.c 00:12:17.953 Processing file lib/nvmf/nvmf.c 00:12:18.212 Processing file lib/rdma/rdma_verbs.c 00:12:18.212 Processing file lib/rdma/common.c 00:12:18.212 Processing file lib/rpc/rpc.c 00:12:18.471 Processing file lib/scsi/scsi.c 00:12:18.471 Processing file lib/scsi/scsi_pr.c 00:12:18.471 Processing file lib/scsi/lun.c 00:12:18.471 Processing file lib/scsi/task.c 00:12:18.471 Processing file lib/scsi/scsi_bdev.c 00:12:18.471 Processing file lib/scsi/scsi_rpc.c 00:12:18.471 Processing file lib/scsi/dev.c 00:12:18.471 Processing file lib/scsi/port.c 00:12:18.471 Processing file lib/sock/sock_rpc.c 00:12:18.471 Processing file lib/sock/sock.c 00:12:18.471 Processing file lib/thread/thread.c 00:12:18.471 Processing file lib/thread/iobuf.c 00:12:18.729 Processing file lib/trace/trace_flags.c 00:12:18.729 Processing file lib/trace/trace.c 00:12:18.729 Processing file lib/trace/trace_rpc.c 00:12:18.729 Processing file lib/trace_parser/trace.cpp 00:12:18.729 Processing file lib/ut/ut.c 00:12:18.988 Processing file lib/ut_mock/mock.c 00:12:19.247 Processing file lib/util/crc32c.c 00:12:19.247 Processing file lib/util/base64.c 00:12:19.247 Processing file lib/util/bit_array.c 00:12:19.247 Processing file lib/util/math.c 00:12:19.247 Processing file lib/util/cpuset.c 00:12:19.247 Processing file lib/util/file.c 00:12:19.247 Processing file lib/util/zipf.c 00:12:19.247 Processing file 
lib/util/crc32_ieee.c 00:12:19.247 Processing file lib/util/crc64.c 00:12:19.247 Processing file lib/util/hexlify.c 00:12:19.247 Processing file lib/util/crc32.c 00:12:19.247 Processing file lib/util/xor.c 00:12:19.247 Processing file lib/util/fd_group.c 00:12:19.247 Processing file lib/util/iov.c 00:12:19.247 Processing file lib/util/pipe.c 00:12:19.247 Processing file lib/util/crc16.c 00:12:19.247 Processing file lib/util/fd.c 00:12:19.247 Processing file lib/util/uuid.c 00:12:19.247 Processing file lib/util/string.c 00:12:19.247 Processing file lib/util/dif.c 00:12:19.247 Processing file lib/util/strerror_tls.c 00:12:19.247 Processing file lib/vfio_user/host/vfio_user_pci.c 00:12:19.247 Processing file lib/vfio_user/host/vfio_user.c 00:12:19.506 Processing file lib/vhost/vhost_scsi.c 00:12:19.506 Processing file lib/vhost/rte_vhost_user.c 00:12:19.506 Processing file lib/vhost/vhost_blk.c 00:12:19.506 Processing file lib/vhost/vhost_rpc.c 00:12:19.506 Processing file lib/vhost/vhost_internal.h 00:12:19.506 Processing file lib/vhost/vhost.c 00:12:19.767 Processing file lib/virtio/virtio_vhost_user.c 00:12:19.767 Processing file lib/virtio/virtio_pci.c 00:12:19.767 Processing file lib/virtio/virtio_vfio_user.c 00:12:19.767 Processing file lib/virtio/virtio.c 00:12:19.767 Processing file lib/vmd/vmd.c 00:12:19.767 Processing file lib/vmd/led.c 00:12:19.767 Processing file module/accel/dsa/accel_dsa.c 00:12:19.767 Processing file module/accel/dsa/accel_dsa_rpc.c 00:12:19.767 Processing file module/accel/error/accel_error_rpc.c 00:12:19.767 Processing file module/accel/error/accel_error.c 00:12:20.029 Processing file module/accel/iaa/accel_iaa.c 00:12:20.029 Processing file module/accel/iaa/accel_iaa_rpc.c 00:12:20.029 Processing file module/accel/ioat/accel_ioat_rpc.c 00:12:20.029 Processing file module/accel/ioat/accel_ioat.c 00:12:20.029 Processing file module/bdev/aio/bdev_aio_rpc.c 00:12:20.029 Processing file module/bdev/aio/bdev_aio.c 00:12:20.287 Processing file module/bdev/delay/vbdev_delay.c 00:12:20.287 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:12:20.287 Processing file module/bdev/error/vbdev_error_rpc.c 00:12:20.287 Processing file module/bdev/error/vbdev_error.c 00:12:20.287 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:12:20.287 Processing file module/bdev/ftl/bdev_ftl.c 00:12:20.545 Processing file module/bdev/gpt/gpt.h 00:12:20.545 Processing file module/bdev/gpt/vbdev_gpt.c 00:12:20.545 Processing file module/bdev/gpt/gpt.c 00:12:20.545 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:12:20.545 Processing file module/bdev/iscsi/bdev_iscsi.c 00:12:20.545 Processing file module/bdev/lvol/vbdev_lvol.c 00:12:20.545 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:12:20.827 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:12:20.827 Processing file module/bdev/malloc/bdev_malloc.c 00:12:20.827 Processing file module/bdev/null/bdev_null_rpc.c 00:12:20.827 Processing file module/bdev/null/bdev_null.c 00:12:21.086 Processing file module/bdev/nvme/nvme_rpc.c 00:12:21.086 Processing file module/bdev/nvme/bdev_nvme.c 00:12:21.086 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:12:21.086 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:12:21.086 Processing file module/bdev/nvme/bdev_mdns_client.c 00:12:21.086 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:12:21.086 Processing file module/bdev/nvme/vbdev_opal.c 00:12:21.086 Processing file module/bdev/passthru/vbdev_passthru.c 00:12:21.086 Processing file 
module/bdev/passthru/vbdev_passthru_rpc.c 00:12:21.345 Processing file module/bdev/raid/bdev_raid_rpc.c 00:12:21.345 Processing file module/bdev/raid/bdev_raid.c 00:12:21.345 Processing file module/bdev/raid/concat.c 00:12:21.345 Processing file module/bdev/raid/bdev_raid_sb.c 00:12:21.345 Processing file module/bdev/raid/raid1.c 00:12:21.345 Processing file module/bdev/raid/raid0.c 00:12:21.345 Processing file module/bdev/raid/raid5f.c 00:12:21.345 Processing file module/bdev/raid/bdev_raid.h 00:12:21.604 Processing file module/bdev/split/vbdev_split.c 00:12:21.604 Processing file module/bdev/split/vbdev_split_rpc.c 00:12:21.604 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:12:21.604 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:12:21.604 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:12:21.604 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:12:21.604 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:12:21.862 Processing file module/blob/bdev/blob_bdev.c 00:12:21.862 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:12:21.862 Processing file module/blobfs/bdev/blobfs_bdev.c 00:12:21.862 Processing file module/env_dpdk/env_dpdk_rpc.c 00:12:21.862 Processing file module/event/subsystems/accel/accel.c 00:12:22.120 Processing file module/event/subsystems/bdev/bdev.c 00:12:22.120 Processing file module/event/subsystems/iobuf/iobuf.c 00:12:22.120 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:12:22.120 Processing file module/event/subsystems/iscsi/iscsi.c 00:12:22.120 Processing file module/event/subsystems/nbd/nbd.c 00:12:22.379 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:12:22.379 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:12:22.379 Processing file module/event/subsystems/scheduler/scheduler.c 00:12:22.379 Processing file module/event/subsystems/scsi/scsi.c 00:12:22.379 Processing file module/event/subsystems/sock/sock.c 00:12:22.638 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:12:22.638 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:12:22.638 Processing file module/event/subsystems/vmd/vmd.c 00:12:22.638 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:12:22.638 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:12:22.897 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:12:22.897 Processing file module/scheduler/gscheduler/gscheduler.c 00:12:22.897 Processing file module/sock/sock_kernel.h 00:12:22.897 Processing file module/sock/posix/posix.c 00:12:22.897 Writing directory view page. 00:12:22.897 Overall coverage rate: 00:12:22.897 lines......: 39.1% (39266 of 100435 lines) 00:12:22.897 functions..: 42.8% (3587 of 8384 functions) 00:12:22.897 00:12:22.897 00:12:22.897 11:56:28 -- unit/unittest.sh@277 -- # set +x 00:12:22.897 ===================== 00:12:22.897 All unit tests passed 00:12:22.897 ===================== 00:12:22.897 WARN: lcov not installed or SPDK built without coverage! 
00:12:22.897 00:12:22.897 00:12:22.897 00:12:22.897 real 3m16.765s 00:12:22.897 user 2m52.703s 00:12:22.897 sys 0m14.592s 00:12:22.898 11:56:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.898 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:22.898 ************************************ 00:12:22.898 END TEST unittest 00:12:22.898 ************************************ 00:12:23.157 11:56:28 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:12:23.157 11:56:28 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:12:23.157 11:56:28 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:12:23.157 11:56:28 -- spdk/autotest.sh@160 -- # timing_enter lib 00:12:23.157 11:56:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.157 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.157 11:56:28 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:23.157 11:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:23.157 11:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.157 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.157 ************************************ 00:12:23.157 START TEST env 00:12:23.157 ************************************ 00:12:23.157 11:56:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:23.157 * Looking for test storage... 00:12:23.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:12:23.157 11:56:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:23.157 11:56:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:23.157 11:56:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:23.157 11:56:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:23.157 11:56:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:23.157 11:56:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:23.157 11:56:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:23.157 11:56:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:23.157 11:56:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:23.157 11:56:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.157 11:56:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:23.157 11:56:28 -- scripts/common.sh@337 -- # local 'op=<' 00:12:23.157 11:56:28 -- scripts/common.sh@339 -- # ver1_l=2 00:12:23.157 11:56:28 -- scripts/common.sh@340 -- # ver2_l=1 00:12:23.157 11:56:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:23.157 11:56:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:23.157 11:56:28 -- scripts/common.sh@344 -- # : 1 00:12:23.157 11:56:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:23.157 11:56:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.157 11:56:28 -- scripts/common.sh@364 -- # decimal 1 00:12:23.157 11:56:28 -- scripts/common.sh@352 -- # local d=1 00:12:23.157 11:56:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.157 11:56:28 -- scripts/common.sh@354 -- # echo 1 00:12:23.157 11:56:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:23.157 11:56:28 -- scripts/common.sh@365 -- # decimal 2 00:12:23.157 11:56:28 -- scripts/common.sh@352 -- # local d=2 00:12:23.157 11:56:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.157 11:56:28 -- scripts/common.sh@354 -- # echo 2 00:12:23.157 11:56:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:23.157 11:56:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:23.157 11:56:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:23.157 11:56:28 -- scripts/common.sh@367 -- # return 0 00:12:23.157 11:56:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.157 11:56:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:23.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.157 --rc genhtml_branch_coverage=1 00:12:23.157 --rc genhtml_function_coverage=1 00:12:23.157 --rc genhtml_legend=1 00:12:23.157 --rc geninfo_all_blocks=1 00:12:23.157 --rc geninfo_unexecuted_blocks=1 00:12:23.157 00:12:23.157 ' 00:12:23.157 11:56:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:23.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.157 --rc genhtml_branch_coverage=1 00:12:23.157 --rc genhtml_function_coverage=1 00:12:23.157 --rc genhtml_legend=1 00:12:23.157 --rc geninfo_all_blocks=1 00:12:23.157 --rc geninfo_unexecuted_blocks=1 00:12:23.157 00:12:23.157 ' 00:12:23.157 11:56:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:23.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.157 --rc genhtml_branch_coverage=1 00:12:23.157 --rc genhtml_function_coverage=1 00:12:23.157 --rc genhtml_legend=1 00:12:23.157 --rc geninfo_all_blocks=1 00:12:23.157 --rc geninfo_unexecuted_blocks=1 00:12:23.157 00:12:23.157 ' 00:12:23.158 11:56:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:23.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.158 --rc genhtml_branch_coverage=1 00:12:23.158 --rc genhtml_function_coverage=1 00:12:23.158 --rc genhtml_legend=1 00:12:23.158 --rc geninfo_all_blocks=1 00:12:23.158 --rc geninfo_unexecuted_blocks=1 00:12:23.158 00:12:23.158 ' 00:12:23.158 11:56:28 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:23.158 11:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:23.158 11:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.158 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.158 ************************************ 00:12:23.158 START TEST env_memory 00:12:23.158 ************************************ 00:12:23.158 11:56:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:23.158 00:12:23.158 00:12:23.158 CUnit - A unit testing framework for C - Version 2.1-3 00:12:23.158 http://cunit.sourceforge.net/ 00:12:23.158 00:12:23.158 00:12:23.158 Suite: memory 00:12:23.417 Test: alloc and free memory map ...[2024-11-29 11:56:28.699900] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:12:23.417 passed 00:12:23.417 Test: mem 
map translation ...[2024-11-29 11:56:28.748951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:12:23.417 [2024-11-29 11:56:28.749119] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:12:23.417 [2024-11-29 11:56:28.749238] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:12:23.417 [2024-11-29 11:56:28.749324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:12:23.417 passed 00:12:23.417 Test: mem map registration ...[2024-11-29 11:56:28.839294] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:12:23.417 [2024-11-29 11:56:28.839498] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:12:23.417 passed 00:12:23.675 Test: mem map adjacent registrations ...passed 00:12:23.675 00:12:23.675 Run Summary: Type Total Ran Passed Failed Inactive 00:12:23.675 suites 1 1 n/a 0 0 00:12:23.675 tests 4 4 4 0 0 00:12:23.675 asserts 152 152 152 0 n/a 00:12:23.675 00:12:23.675 Elapsed time = 0.306 seconds 00:12:23.675 00:12:23.675 real 0m0.339s 00:12:23.675 user 0m0.301s 00:12:23.675 sys 0m0.038s 00:12:23.675 11:56:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:23.675 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.675 ************************************ 00:12:23.675 END TEST env_memory 00:12:23.675 ************************************ 00:12:23.675 11:56:29 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:23.675 11:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:23.675 11:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.675 11:56:29 -- common/autotest_common.sh@10 -- # set +x 00:12:23.675 ************************************ 00:12:23.675 START TEST env_vtophys 00:12:23.675 ************************************ 00:12:23.675 11:56:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:23.675 EAL: lib.eal log level changed from notice to debug 00:12:23.675 EAL: Detected lcore 0 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 1 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 2 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 3 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 4 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 5 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 6 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 7 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 8 as core 0 on socket 0 00:12:23.675 EAL: Detected lcore 9 as core 0 on socket 0 00:12:23.675 EAL: Maximum logical cores by configuration: 128 00:12:23.675 EAL: Detected CPU lcores: 10 00:12:23.675 EAL: Detected NUMA nodes: 1 00:12:23.675 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:12:23.675 EAL: Checking presence of .so 'librte_eal.so.23' 00:12:23.675 EAL: Checking presence of .so 'librte_eal.so' 00:12:23.675 EAL: Detected static linkage of DPDK 00:12:23.675 EAL: No shared files mode enabled, IPC will be 
disabled 00:12:23.675 EAL: Selected IOVA mode 'PA' 00:12:23.675 EAL: Probing VFIO support... 00:12:23.675 EAL: IOMMU type 1 (Type 1) is supported 00:12:23.675 EAL: IOMMU type 7 (sPAPR) is not supported 00:12:23.675 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:12:23.675 EAL: VFIO support initialized 00:12:23.675 EAL: Ask a virtual area of 0x2e000 bytes 00:12:23.675 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:12:23.675 EAL: Setting up physically contiguous memory... 00:12:23.675 EAL: Setting maximum number of open files to 1048576 00:12:23.675 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:12:23.675 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:12:23.675 EAL: Ask a virtual area of 0x61000 bytes 00:12:23.675 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:12:23.675 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:23.675 EAL: Ask a virtual area of 0x400000000 bytes 00:12:23.675 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:12:23.675 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:12:23.675 EAL: Ask a virtual area of 0x61000 bytes 00:12:23.675 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:12:23.676 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:23.676 EAL: Ask a virtual area of 0x400000000 bytes 00:12:23.676 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:12:23.676 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:12:23.676 EAL: Ask a virtual area of 0x61000 bytes 00:12:23.676 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:12:23.676 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:23.676 EAL: Ask a virtual area of 0x400000000 bytes 00:12:23.676 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:12:23.676 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:12:23.676 EAL: Ask a virtual area of 0x61000 bytes 00:12:23.676 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:12:23.676 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:23.676 EAL: Ask a virtual area of 0x400000000 bytes 00:12:23.676 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:12:23.676 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:12:23.676 EAL: Hugepages will be freed exactly as allocated. 00:12:23.676 EAL: No shared files mode enabled, IPC is disabled 00:12:23.676 EAL: No shared files mode enabled, IPC is disabled 00:12:23.676 EAL: TSC frequency is ~2200000 KHz 00:12:23.676 EAL: Main lcore 0 is ready (tid=7f8a73429a80;cpuset=[0]) 00:12:23.676 EAL: Trying to obtain current memory policy. 00:12:23.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:23.676 EAL: Restoring previous memory policy: 0 00:12:23.676 EAL: request: mp_malloc_sync 00:12:23.676 EAL: No shared files mode enabled, IPC is disabled 00:12:23.676 EAL: Heap on socket 0 was expanded by 2MB 00:12:23.676 EAL: No shared files mode enabled, IPC is disabled 00:12:23.934 EAL: Mem event callback 'spdk:(nil)' registered 00:12:23.934 00:12:23.934 00:12:23.934 CUnit - A unit testing framework for C - Version 2.1-3 00:12:23.934 http://cunit.sourceforge.net/ 00:12:23.934 00:12:23.934 00:12:23.934 Suite: components_suite 00:12:24.192 Test: vtophys_malloc_test ...passed 00:12:24.192 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:12:24.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.192 EAL: Restoring previous memory policy: 0 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was expanded by 4MB 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was shrunk by 4MB 00:12:24.192 EAL: Trying to obtain current memory policy. 00:12:24.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.192 EAL: Restoring previous memory policy: 0 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was expanded by 6MB 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was shrunk by 6MB 00:12:24.192 EAL: Trying to obtain current memory policy. 00:12:24.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.192 EAL: Restoring previous memory policy: 0 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was expanded by 10MB 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was shrunk by 10MB 00:12:24.192 EAL: Trying to obtain current memory policy. 00:12:24.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.192 EAL: Restoring previous memory policy: 0 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was expanded by 18MB 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was shrunk by 18MB 00:12:24.192 EAL: Trying to obtain current memory policy. 00:12:24.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.192 EAL: Restoring previous memory policy: 0 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was expanded by 34MB 00:12:24.192 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.192 EAL: request: mp_malloc_sync 00:12:24.192 EAL: No shared files mode enabled, IPC is disabled 00:12:24.192 EAL: Heap on socket 0 was shrunk by 34MB 00:12:24.192 EAL: Trying to obtain current memory policy. 
00:12:24.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.450 EAL: Restoring previous memory policy: 0 00:12:24.450 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.450 EAL: request: mp_malloc_sync 00:12:24.450 EAL: No shared files mode enabled, IPC is disabled 00:12:24.450 EAL: Heap on socket 0 was expanded by 66MB 00:12:24.450 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.451 EAL: request: mp_malloc_sync 00:12:24.451 EAL: No shared files mode enabled, IPC is disabled 00:12:24.451 EAL: Heap on socket 0 was shrunk by 66MB 00:12:24.451 EAL: Trying to obtain current memory policy. 00:12:24.451 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.451 EAL: Restoring previous memory policy: 0 00:12:24.451 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.451 EAL: request: mp_malloc_sync 00:12:24.451 EAL: No shared files mode enabled, IPC is disabled 00:12:24.451 EAL: Heap on socket 0 was expanded by 130MB 00:12:24.451 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.451 EAL: request: mp_malloc_sync 00:12:24.451 EAL: No shared files mode enabled, IPC is disabled 00:12:24.451 EAL: Heap on socket 0 was shrunk by 130MB 00:12:24.451 EAL: Trying to obtain current memory policy. 00:12:24.451 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.451 EAL: Restoring previous memory policy: 0 00:12:24.451 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.451 EAL: request: mp_malloc_sync 00:12:24.451 EAL: No shared files mode enabled, IPC is disabled 00:12:24.451 EAL: Heap on socket 0 was expanded by 258MB 00:12:24.451 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.709 EAL: request: mp_malloc_sync 00:12:24.709 EAL: No shared files mode enabled, IPC is disabled 00:12:24.709 EAL: Heap on socket 0 was shrunk by 258MB 00:12:24.709 EAL: Trying to obtain current memory policy. 00:12:24.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:24.709 EAL: Restoring previous memory policy: 0 00:12:24.709 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.709 EAL: request: mp_malloc_sync 00:12:24.709 EAL: No shared files mode enabled, IPC is disabled 00:12:24.709 EAL: Heap on socket 0 was expanded by 514MB 00:12:24.967 EAL: Calling mem event callback 'spdk:(nil)' 00:12:24.967 EAL: request: mp_malloc_sync 00:12:24.967 EAL: No shared files mode enabled, IPC is disabled 00:12:24.967 EAL: Heap on socket 0 was shrunk by 514MB 00:12:24.967 EAL: Trying to obtain current memory policy. 
00:12:24.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:25.230 EAL: Restoring previous memory policy: 0 00:12:25.230 EAL: Calling mem event callback 'spdk:(nil)' 00:12:25.230 EAL: request: mp_malloc_sync 00:12:25.230 EAL: No shared files mode enabled, IPC is disabled 00:12:25.230 EAL: Heap on socket 0 was expanded by 1026MB 00:12:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:12:25.752 EAL: request: mp_malloc_sync 00:12:25.752 EAL: No shared files mode enabled, IPC is disabled 00:12:25.752 EAL: Heap on socket 0 was shrunk by 1026MB 00:12:25.752 passed 00:12:25.752 00:12:25.752 Run Summary: Type Total Ran Passed Failed Inactive 00:12:25.752 suites 1 1 n/a 0 0 00:12:25.752 tests 2 2 2 0 0 00:12:25.752 asserts 6303 6303 6303 0 n/a 00:12:25.752 00:12:25.752 Elapsed time = 1.791 seconds 00:12:25.752 EAL: Calling mem event callback 'spdk:(nil)' 00:12:25.752 EAL: request: mp_malloc_sync 00:12:25.752 EAL: No shared files mode enabled, IPC is disabled 00:12:25.752 EAL: Heap on socket 0 was shrunk by 2MB 00:12:25.752 EAL: No shared files mode enabled, IPC is disabled 00:12:25.752 EAL: No shared files mode enabled, IPC is disabled 00:12:25.752 EAL: No shared files mode enabled, IPC is disabled 00:12:25.752 00:12:25.752 real 0m2.038s 00:12:25.752 user 0m1.023s 00:12:25.752 sys 0m0.876s 00:12:25.752 11:56:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:25.752 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:25.752 ************************************ 00:12:25.752 END TEST env_vtophys 00:12:25.752 ************************************ 00:12:25.752 11:56:31 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:25.752 11:56:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:25.752 11:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.752 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:25.752 ************************************ 00:12:25.752 START TEST env_pci 00:12:25.752 ************************************ 00:12:25.752 11:56:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:25.752 00:12:25.752 00:12:25.752 CUnit - A unit testing framework for C - Version 2.1-3 00:12:25.752 http://cunit.sourceforge.net/ 00:12:25.752 00:12:25.752 00:12:25.752 Suite: pci 00:12:25.752 Test: pci_hook ...[2024-11-29 11:56:31.145608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 115351 has claimed it 00:12:25.752 passed 00:12:25.752 00:12:25.752 EAL: Cannot find device (10000:00:01.0) 00:12:25.752 EAL: Failed to attach device on primary process 00:12:25.752 Run Summary: Type Total Ran Passed Failed Inactive 00:12:25.752 suites 1 1 n/a 0 0 00:12:25.752 tests 1 1 1 0 0 00:12:25.752 asserts 25 25 25 0 n/a 00:12:25.752 00:12:25.752 Elapsed time = 0.005 seconds 00:12:25.752 00:12:25.752 real 0m0.069s 00:12:25.752 user 0m0.037s 00:12:25.752 sys 0m0.032s 00:12:25.752 11:56:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:25.752 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:25.752 ************************************ 00:12:25.752 END TEST env_pci 00:12:25.752 ************************************ 00:12:25.752 11:56:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:12:25.752 11:56:31 -- env/env.sh@15 -- # uname 00:12:25.752 11:56:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:12:25.752 11:56:31 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:12:25.752 11:56:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:25.752 11:56:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:25.752 11:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.752 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:25.752 ************************************ 00:12:25.752 START TEST env_dpdk_post_init 00:12:25.752 ************************************ 00:12:25.752 11:56:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:26.009 EAL: Detected CPU lcores: 10 00:12:26.009 EAL: Detected NUMA nodes: 1 00:12:26.009 EAL: Detected static linkage of DPDK 00:12:26.009 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:26.009 EAL: Selected IOVA mode 'PA' 00:12:26.009 EAL: VFIO support initialized 00:12:26.009 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:26.009 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:12:26.009 Starting DPDK initialization... 00:12:26.009 Starting SPDK post initialization... 00:12:26.009 SPDK NVMe probe 00:12:26.009 Attaching to 0000:00:06.0 00:12:26.009 Attached to 0000:00:06.0 00:12:26.009 Cleaning up... 00:12:26.009 00:12:26.009 real 0m0.231s 00:12:26.009 user 0m0.062s 00:12:26.009 sys 0m0.071s 00:12:26.009 11:56:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.009 ************************************ 00:12:26.009 END TEST env_dpdk_post_init 00:12:26.009 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.009 ************************************ 00:12:26.009 11:56:31 -- env/env.sh@26 -- # uname 00:12:26.268 11:56:31 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:12:26.268 11:56:31 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:26.268 11:56:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:26.268 11:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.268 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.268 ************************************ 00:12:26.268 START TEST env_mem_callbacks 00:12:26.268 ************************************ 00:12:26.268 11:56:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:26.268 EAL: Detected CPU lcores: 10 00:12:26.268 EAL: Detected NUMA nodes: 1 00:12:26.268 EAL: Detected static linkage of DPDK 00:12:26.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:26.268 EAL: Selected IOVA mode 'PA' 00:12:26.268 EAL: VFIO support initialized 00:12:26.268 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:26.268 00:12:26.268 00:12:26.268 CUnit - A unit testing framework for C - Version 2.1-3 00:12:26.268 http://cunit.sourceforge.net/ 00:12:26.268 00:12:26.268 00:12:26.268 Suite: memory 00:12:26.268 Test: test ... 
00:12:26.268 register 0x200000200000 2097152 00:12:26.268 malloc 3145728 00:12:26.268 register 0x200000400000 4194304 00:12:26.268 buf 0x200000500000 len 3145728 PASSED 00:12:26.268 malloc 64 00:12:26.268 buf 0x2000004fff40 len 64 PASSED 00:12:26.268 malloc 4194304 00:12:26.268 register 0x200000800000 6291456 00:12:26.268 buf 0x200000a00000 len 4194304 PASSED 00:12:26.268 free 0x200000500000 3145728 00:12:26.268 free 0x2000004fff40 64 00:12:26.268 unregister 0x200000400000 4194304 PASSED 00:12:26.268 free 0x200000a00000 4194304 00:12:26.268 unregister 0x200000800000 6291456 PASSED 00:12:26.268 malloc 8388608 00:12:26.268 register 0x200000400000 10485760 00:12:26.268 buf 0x200000600000 len 8388608 PASSED 00:12:26.268 free 0x200000600000 8388608 00:12:26.268 unregister 0x200000400000 10485760 PASSED 00:12:26.268 passed 00:12:26.268 00:12:26.268 Run Summary: Type Total Ran Passed Failed Inactive 00:12:26.268 suites 1 1 n/a 0 0 00:12:26.268 tests 1 1 1 0 0 00:12:26.268 asserts 15 15 15 0 n/a 00:12:26.268 00:12:26.268 Elapsed time = 0.008 seconds 00:12:26.268 00:12:26.268 real 0m0.195s 00:12:26.268 user 0m0.044s 00:12:26.268 sys 0m0.052s 00:12:26.268 11:56:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.268 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.268 ************************************ 00:12:26.268 END TEST env_mem_callbacks 00:12:26.268 ************************************ 00:12:26.268 00:12:26.268 real 0m3.304s 00:12:26.268 user 0m1.729s 00:12:26.268 sys 0m1.235s 00:12:26.268 11:56:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.268 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.268 ************************************ 00:12:26.268 END TEST env 00:12:26.268 ************************************ 00:12:26.526 11:56:31 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:26.526 11:56:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:26.526 11:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.526 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.526 ************************************ 00:12:26.526 START TEST rpc 00:12:26.526 ************************************ 00:12:26.526 11:56:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:26.526 * Looking for test storage... 
00:12:26.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:26.526 11:56:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:26.526 11:56:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:26.526 11:56:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:26.526 11:56:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:26.526 11:56:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:26.526 11:56:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:26.526 11:56:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:26.526 11:56:31 -- scripts/common.sh@335 -- # IFS=.-: 00:12:26.526 11:56:31 -- scripts/common.sh@335 -- # read -ra ver1 00:12:26.526 11:56:31 -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.526 11:56:31 -- scripts/common.sh@336 -- # read -ra ver2 00:12:26.526 11:56:31 -- scripts/common.sh@337 -- # local 'op=<' 00:12:26.526 11:56:31 -- scripts/common.sh@339 -- # ver1_l=2 00:12:26.526 11:56:31 -- scripts/common.sh@340 -- # ver2_l=1 00:12:26.526 11:56:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:26.526 11:56:31 -- scripts/common.sh@343 -- # case "$op" in 00:12:26.526 11:56:31 -- scripts/common.sh@344 -- # : 1 00:12:26.526 11:56:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:26.526 11:56:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.526 11:56:31 -- scripts/common.sh@364 -- # decimal 1 00:12:26.526 11:56:31 -- scripts/common.sh@352 -- # local d=1 00:12:26.526 11:56:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.526 11:56:31 -- scripts/common.sh@354 -- # echo 1 00:12:26.526 11:56:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:26.526 11:56:31 -- scripts/common.sh@365 -- # decimal 2 00:12:26.526 11:56:31 -- scripts/common.sh@352 -- # local d=2 00:12:26.526 11:56:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.526 11:56:31 -- scripts/common.sh@354 -- # echo 2 00:12:26.526 11:56:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:26.526 11:56:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:26.526 11:56:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:26.526 11:56:31 -- scripts/common.sh@367 -- # return 0 00:12:26.526 11:56:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.526 11:56:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:26.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.526 --rc genhtml_branch_coverage=1 00:12:26.526 --rc genhtml_function_coverage=1 00:12:26.526 --rc genhtml_legend=1 00:12:26.526 --rc geninfo_all_blocks=1 00:12:26.526 --rc geninfo_unexecuted_blocks=1 00:12:26.526 00:12:26.526 ' 00:12:26.526 11:56:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:26.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.526 --rc genhtml_branch_coverage=1 00:12:26.526 --rc genhtml_function_coverage=1 00:12:26.526 --rc genhtml_legend=1 00:12:26.526 --rc geninfo_all_blocks=1 00:12:26.526 --rc geninfo_unexecuted_blocks=1 00:12:26.526 00:12:26.526 ' 00:12:26.526 11:56:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:26.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.526 --rc genhtml_branch_coverage=1 00:12:26.526 --rc genhtml_function_coverage=1 00:12:26.526 --rc genhtml_legend=1 00:12:26.526 --rc geninfo_all_blocks=1 00:12:26.526 --rc geninfo_unexecuted_blocks=1 00:12:26.526 00:12:26.526 ' 00:12:26.526 11:56:31 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:26.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.526 --rc genhtml_branch_coverage=1 00:12:26.526 --rc genhtml_function_coverage=1 00:12:26.526 --rc genhtml_legend=1 00:12:26.526 --rc geninfo_all_blocks=1 00:12:26.526 --rc geninfo_unexecuted_blocks=1 00:12:26.526 00:12:26.526 ' 00:12:26.526 11:56:31 -- rpc/rpc.sh@65 -- # spdk_pid=115489 00:12:26.526 11:56:31 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:12:26.526 11:56:31 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:26.526 11:56:31 -- rpc/rpc.sh@67 -- # waitforlisten 115489 00:12:26.526 11:56:31 -- common/autotest_common.sh@829 -- # '[' -z 115489 ']' 00:12:26.526 11:56:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.526 11:56:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.526 11:56:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.526 11:56:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.526 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.784 [2024-11-29 11:56:32.058875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:26.784 [2024-11-29 11:56:32.059107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115489 ] 00:12:26.784 [2024-11-29 11:56:32.206991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.041 [2024-11-29 11:56:32.314258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:27.041 [2024-11-29 11:56:32.314601] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:12:27.041 [2024-11-29 11:56:32.314676] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 115489' to capture a snapshot of events at runtime. 00:12:27.041 [2024-11-29 11:56:32.314755] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid115489 for offline analysis/debug. 
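The rpc_integrity test that follows drives the freshly started target purely over JSON-RPC. A minimal sketch of the same sequence, issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket, looks roughly as follows; the test itself goes through the rpc_cmd wrapper and the jq assertions visible in the trace, the relative paths assume the SPDK repository root, and the rpc_get_methods poll stands in for the waitforlisten helper.

  build/bin/spdk_tgt -e bdev &
  scripts/rpc.py rpc_get_methods > /dev/null        # retry until the RPC socket answers
  scripts/rpc.py bdev_malloc_create 8 512           # creates Malloc0 (16384 blocks of 512 B)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length         # expect 2: Malloc0 plus Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length         # expect 0 again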
00:12:27.041 [2024-11-29 11:56:32.314868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.608 11:56:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.608 11:56:33 -- common/autotest_common.sh@862 -- # return 0 00:12:27.608 11:56:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:27.608 11:56:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:27.608 11:56:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:12:27.608 11:56:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:12:27.608 11:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:27.608 11:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.608 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.608 ************************************ 00:12:27.608 START TEST rpc_integrity 00:12:27.608 ************************************ 00:12:27.608 11:56:33 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:12:27.608 11:56:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.608 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.608 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.608 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.608 11:56:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:27.608 11:56:33 -- rpc/rpc.sh@13 -- # jq length 00:12:27.866 11:56:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:27.866 11:56:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:27.866 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.866 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.866 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.866 11:56:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:12:27.866 11:56:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:27.866 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.866 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.866 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.866 11:56:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:27.866 { 00:12:27.866 "name": "Malloc0", 00:12:27.866 "aliases": [ 00:12:27.866 "22b7e994-6456-49b5-a5df-24855274acaf" 00:12:27.866 ], 00:12:27.866 "product_name": "Malloc disk", 00:12:27.866 "block_size": 512, 00:12:27.866 "num_blocks": 16384, 00:12:27.866 "uuid": "22b7e994-6456-49b5-a5df-24855274acaf", 00:12:27.866 "assigned_rate_limits": { 00:12:27.866 "rw_ios_per_sec": 0, 00:12:27.866 "rw_mbytes_per_sec": 0, 00:12:27.866 "r_mbytes_per_sec": 0, 00:12:27.866 "w_mbytes_per_sec": 0 00:12:27.866 }, 00:12:27.866 "claimed": false, 00:12:27.866 "zoned": false, 00:12:27.866 "supported_io_types": { 00:12:27.866 "read": true, 00:12:27.866 "write": true, 00:12:27.866 "unmap": true, 00:12:27.866 "write_zeroes": true, 00:12:27.866 "flush": true, 00:12:27.866 "reset": true, 00:12:27.866 "compare": false, 00:12:27.866 "compare_and_write": false, 00:12:27.866 "abort": true, 00:12:27.866 "nvme_admin": false, 00:12:27.866 "nvme_io": false 00:12:27.866 }, 00:12:27.866 "memory_domains": [ 00:12:27.866 { 00:12:27.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.866 
"dma_device_type": 2 00:12:27.866 } 00:12:27.866 ], 00:12:27.866 "driver_specific": {} 00:12:27.866 } 00:12:27.866 ]' 00:12:27.866 11:56:33 -- rpc/rpc.sh@17 -- # jq length 00:12:27.866 11:56:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:27.866 11:56:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:12:27.866 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.866 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.866 [2024-11-29 11:56:33.251707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:12:27.866 [2024-11-29 11:56:33.251857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.866 [2024-11-29 11:56:33.251929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:12:27.866 [2024-11-29 11:56:33.251983] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.866 [2024-11-29 11:56:33.255132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.866 [2024-11-29 11:56:33.255221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:27.866 Passthru0 00:12:27.866 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.866 11:56:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:27.866 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.866 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.866 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.866 11:56:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:27.866 { 00:12:27.866 "name": "Malloc0", 00:12:27.866 "aliases": [ 00:12:27.866 "22b7e994-6456-49b5-a5df-24855274acaf" 00:12:27.866 ], 00:12:27.867 "product_name": "Malloc disk", 00:12:27.867 "block_size": 512, 00:12:27.867 "num_blocks": 16384, 00:12:27.867 "uuid": "22b7e994-6456-49b5-a5df-24855274acaf", 00:12:27.867 "assigned_rate_limits": { 00:12:27.867 "rw_ios_per_sec": 0, 00:12:27.867 "rw_mbytes_per_sec": 0, 00:12:27.867 "r_mbytes_per_sec": 0, 00:12:27.867 "w_mbytes_per_sec": 0 00:12:27.867 }, 00:12:27.867 "claimed": true, 00:12:27.867 "claim_type": "exclusive_write", 00:12:27.867 "zoned": false, 00:12:27.867 "supported_io_types": { 00:12:27.867 "read": true, 00:12:27.867 "write": true, 00:12:27.867 "unmap": true, 00:12:27.867 "write_zeroes": true, 00:12:27.867 "flush": true, 00:12:27.867 "reset": true, 00:12:27.867 "compare": false, 00:12:27.867 "compare_and_write": false, 00:12:27.867 "abort": true, 00:12:27.867 "nvme_admin": false, 00:12:27.867 "nvme_io": false 00:12:27.867 }, 00:12:27.867 "memory_domains": [ 00:12:27.867 { 00:12:27.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.867 "dma_device_type": 2 00:12:27.867 } 00:12:27.867 ], 00:12:27.867 "driver_specific": {} 00:12:27.867 }, 00:12:27.867 { 00:12:27.867 "name": "Passthru0", 00:12:27.867 "aliases": [ 00:12:27.867 "cc1919ef-17ab-55b6-a5c5-8f4b5505731a" 00:12:27.867 ], 00:12:27.867 "product_name": "passthru", 00:12:27.867 "block_size": 512, 00:12:27.867 "num_blocks": 16384, 00:12:27.867 "uuid": "cc1919ef-17ab-55b6-a5c5-8f4b5505731a", 00:12:27.867 "assigned_rate_limits": { 00:12:27.867 "rw_ios_per_sec": 0, 00:12:27.867 "rw_mbytes_per_sec": 0, 00:12:27.867 "r_mbytes_per_sec": 0, 00:12:27.867 "w_mbytes_per_sec": 0 00:12:27.867 }, 00:12:27.867 "claimed": false, 00:12:27.867 "zoned": false, 00:12:27.867 "supported_io_types": { 00:12:27.867 "read": true, 00:12:27.867 "write": true, 00:12:27.867 "unmap": true, 00:12:27.867 
"write_zeroes": true, 00:12:27.867 "flush": true, 00:12:27.867 "reset": true, 00:12:27.867 "compare": false, 00:12:27.867 "compare_and_write": false, 00:12:27.867 "abort": true, 00:12:27.867 "nvme_admin": false, 00:12:27.867 "nvme_io": false 00:12:27.867 }, 00:12:27.867 "memory_domains": [ 00:12:27.867 { 00:12:27.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.867 "dma_device_type": 2 00:12:27.867 } 00:12:27.867 ], 00:12:27.867 "driver_specific": { 00:12:27.867 "passthru": { 00:12:27.867 "name": "Passthru0", 00:12:27.867 "base_bdev_name": "Malloc0" 00:12:27.867 } 00:12:27.867 } 00:12:27.867 } 00:12:27.867 ]' 00:12:27.867 11:56:33 -- rpc/rpc.sh@21 -- # jq length 00:12:27.867 11:56:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:27.867 11:56:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:27.867 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.867 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.867 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.867 11:56:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:27.867 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.867 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.867 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.867 11:56:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:27.867 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.867 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:27.867 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.867 11:56:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:27.867 11:56:33 -- rpc/rpc.sh@26 -- # jq length 00:12:28.125 11:56:33 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:28.125 00:12:28.125 real 0m0.326s 00:12:28.126 user 0m0.228s 00:12:28.126 sys 0m0.034s 00:12:28.126 11:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 ************************************ 00:12:28.126 END TEST rpc_integrity 00:12:28.126 ************************************ 00:12:28.126 11:56:33 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:12:28.126 11:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.126 11:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 ************************************ 00:12:28.126 START TEST rpc_plugins 00:12:28.126 ************************************ 00:12:28.126 11:56:33 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:12:28.126 11:56:33 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:12:28.126 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.126 11:56:33 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:12:28.126 11:56:33 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:12:28.126 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.126 11:56:33 -- rpc/rpc.sh@31 -- # bdevs='[ 00:12:28.126 { 00:12:28.126 "name": "Malloc1", 00:12:28.126 "aliases": [ 00:12:28.126 "cd82f957-e3e3-4c03-b9a8-519f36e8347d" 00:12:28.126 ], 00:12:28.126 "product_name": "Malloc disk", 00:12:28.126 
"block_size": 4096, 00:12:28.126 "num_blocks": 256, 00:12:28.126 "uuid": "cd82f957-e3e3-4c03-b9a8-519f36e8347d", 00:12:28.126 "assigned_rate_limits": { 00:12:28.126 "rw_ios_per_sec": 0, 00:12:28.126 "rw_mbytes_per_sec": 0, 00:12:28.126 "r_mbytes_per_sec": 0, 00:12:28.126 "w_mbytes_per_sec": 0 00:12:28.126 }, 00:12:28.126 "claimed": false, 00:12:28.126 "zoned": false, 00:12:28.126 "supported_io_types": { 00:12:28.126 "read": true, 00:12:28.126 "write": true, 00:12:28.126 "unmap": true, 00:12:28.126 "write_zeroes": true, 00:12:28.126 "flush": true, 00:12:28.126 "reset": true, 00:12:28.126 "compare": false, 00:12:28.126 "compare_and_write": false, 00:12:28.126 "abort": true, 00:12:28.126 "nvme_admin": false, 00:12:28.126 "nvme_io": false 00:12:28.126 }, 00:12:28.126 "memory_domains": [ 00:12:28.126 { 00:12:28.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.126 "dma_device_type": 2 00:12:28.126 } 00:12:28.126 ], 00:12:28.126 "driver_specific": {} 00:12:28.126 } 00:12:28.126 ]' 00:12:28.126 11:56:33 -- rpc/rpc.sh@32 -- # jq length 00:12:28.126 11:56:33 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:12:28.126 11:56:33 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:12:28.126 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.126 11:56:33 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:12:28.126 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.126 11:56:33 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:12:28.126 11:56:33 -- rpc/rpc.sh@36 -- # jq length 00:12:28.126 11:56:33 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:12:28.126 00:12:28.126 real 0m0.146s 00:12:28.126 user 0m0.090s 00:12:28.126 sys 0m0.022s 00:12:28.126 11:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.126 ************************************ 00:12:28.126 END TEST rpc_plugins 00:12:28.126 ************************************ 00:12:28.126 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.384 11:56:33 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:12:28.384 11:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.385 11:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.385 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 ************************************ 00:12:28.385 START TEST rpc_trace_cmd_test 00:12:28.385 ************************************ 00:12:28.385 11:56:33 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:12:28.385 11:56:33 -- rpc/rpc.sh@40 -- # local info 00:12:28.385 11:56:33 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:12:28.385 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.385 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.385 11:56:33 -- rpc/rpc.sh@42 -- # info='{ 00:12:28.385 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid115489", 00:12:28.385 "tpoint_group_mask": "0x8", 00:12:28.385 "iscsi_conn": { 00:12:28.385 "mask": "0x2", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "scsi": { 00:12:28.385 "mask": "0x4", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "bdev": { 00:12:28.385 "mask": "0x8", 00:12:28.385 "tpoint_mask": 
"0xffffffffffffffff" 00:12:28.385 }, 00:12:28.385 "nvmf_rdma": { 00:12:28.385 "mask": "0x10", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "nvmf_tcp": { 00:12:28.385 "mask": "0x20", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "ftl": { 00:12:28.385 "mask": "0x40", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "blobfs": { 00:12:28.385 "mask": "0x80", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "dsa": { 00:12:28.385 "mask": "0x200", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "thread": { 00:12:28.385 "mask": "0x400", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "nvme_pcie": { 00:12:28.385 "mask": "0x800", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "iaa": { 00:12:28.385 "mask": "0x1000", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "nvme_tcp": { 00:12:28.385 "mask": "0x2000", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 }, 00:12:28.385 "bdev_nvme": { 00:12:28.385 "mask": "0x4000", 00:12:28.385 "tpoint_mask": "0x0" 00:12:28.385 } 00:12:28.385 }' 00:12:28.385 11:56:33 -- rpc/rpc.sh@43 -- # jq length 00:12:28.385 11:56:33 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:12:28.385 11:56:33 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:12:28.385 11:56:33 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:12:28.385 11:56:33 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:12:28.385 11:56:33 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:12:28.385 11:56:33 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:12:28.385 11:56:33 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:12:28.385 11:56:33 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:12:28.644 11:56:33 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:12:28.644 00:12:28.644 real 0m0.275s 00:12:28.644 user 0m0.250s 00:12:28.644 sys 0m0.018s 00:12:28.644 ************************************ 00:12:28.644 END TEST rpc_trace_cmd_test 00:12:28.644 ************************************ 00:12:28.644 11:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.644 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.644 11:56:33 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:12:28.644 11:56:33 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:12:28.644 11:56:33 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:12:28.644 11:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.644 11:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.644 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.644 ************************************ 00:12:28.644 START TEST rpc_daemon_integrity 00:12:28.644 ************************************ 00:12:28.644 11:56:33 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:12:28.644 11:56:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.644 11:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.644 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.644 11:56:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.644 11:56:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:28.644 11:56:33 -- rpc/rpc.sh@13 -- # jq length 00:12:28.644 11:56:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:28.644 11:56:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:28.644 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.644 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.644 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.644 11:56:34 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:12:28.644 11:56:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:28.644 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.644 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.644 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.644 11:56:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:28.644 { 00:12:28.644 "name": "Malloc2", 00:12:28.644 "aliases": [ 00:12:28.644 "e0ba7178-40d7-4978-98e5-0aef6c8e4b7f" 00:12:28.644 ], 00:12:28.644 "product_name": "Malloc disk", 00:12:28.644 "block_size": 512, 00:12:28.645 "num_blocks": 16384, 00:12:28.645 "uuid": "e0ba7178-40d7-4978-98e5-0aef6c8e4b7f", 00:12:28.645 "assigned_rate_limits": { 00:12:28.645 "rw_ios_per_sec": 0, 00:12:28.645 "rw_mbytes_per_sec": 0, 00:12:28.645 "r_mbytes_per_sec": 0, 00:12:28.645 "w_mbytes_per_sec": 0 00:12:28.645 }, 00:12:28.645 "claimed": false, 00:12:28.645 "zoned": false, 00:12:28.645 "supported_io_types": { 00:12:28.645 "read": true, 00:12:28.645 "write": true, 00:12:28.645 "unmap": true, 00:12:28.645 "write_zeroes": true, 00:12:28.645 "flush": true, 00:12:28.645 "reset": true, 00:12:28.645 "compare": false, 00:12:28.645 "compare_and_write": false, 00:12:28.645 "abort": true, 00:12:28.645 "nvme_admin": false, 00:12:28.645 "nvme_io": false 00:12:28.645 }, 00:12:28.645 "memory_domains": [ 00:12:28.645 { 00:12:28.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.645 "dma_device_type": 2 00:12:28.645 } 00:12:28.645 ], 00:12:28.645 "driver_specific": {} 00:12:28.645 } 00:12:28.645 ]' 00:12:28.645 11:56:34 -- rpc/rpc.sh@17 -- # jq length 00:12:28.645 11:56:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:28.645 11:56:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:12:28.645 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.645 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.645 [2024-11-29 11:56:34.125485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:12:28.645 [2024-11-29 11:56:34.125608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.645 [2024-11-29 11:56:34.125670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:28.645 [2024-11-29 11:56:34.125696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.645 [2024-11-29 11:56:34.128388] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.645 [2024-11-29 11:56:34.128481] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:28.645 Passthru0 00:12:28.645 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.645 11:56:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:28.645 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.645 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.645 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.645 11:56:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:28.645 { 00:12:28.645 "name": "Malloc2", 00:12:28.645 "aliases": [ 00:12:28.645 "e0ba7178-40d7-4978-98e5-0aef6c8e4b7f" 00:12:28.645 ], 00:12:28.645 "product_name": "Malloc disk", 00:12:28.645 "block_size": 512, 00:12:28.645 "num_blocks": 16384, 00:12:28.645 "uuid": "e0ba7178-40d7-4978-98e5-0aef6c8e4b7f", 00:12:28.645 "assigned_rate_limits": { 00:12:28.645 "rw_ios_per_sec": 0, 00:12:28.645 "rw_mbytes_per_sec": 0, 00:12:28.645 "r_mbytes_per_sec": 0, 00:12:28.645 
"w_mbytes_per_sec": 0 00:12:28.645 }, 00:12:28.645 "claimed": true, 00:12:28.645 "claim_type": "exclusive_write", 00:12:28.645 "zoned": false, 00:12:28.645 "supported_io_types": { 00:12:28.645 "read": true, 00:12:28.645 "write": true, 00:12:28.645 "unmap": true, 00:12:28.645 "write_zeroes": true, 00:12:28.645 "flush": true, 00:12:28.645 "reset": true, 00:12:28.645 "compare": false, 00:12:28.645 "compare_and_write": false, 00:12:28.645 "abort": true, 00:12:28.645 "nvme_admin": false, 00:12:28.645 "nvme_io": false 00:12:28.645 }, 00:12:28.645 "memory_domains": [ 00:12:28.645 { 00:12:28.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.645 "dma_device_type": 2 00:12:28.645 } 00:12:28.645 ], 00:12:28.645 "driver_specific": {} 00:12:28.645 }, 00:12:28.645 { 00:12:28.645 "name": "Passthru0", 00:12:28.645 "aliases": [ 00:12:28.645 "e11b70ec-0a7b-541a-9df7-b112ca862d09" 00:12:28.645 ], 00:12:28.645 "product_name": "passthru", 00:12:28.645 "block_size": 512, 00:12:28.645 "num_blocks": 16384, 00:12:28.645 "uuid": "e11b70ec-0a7b-541a-9df7-b112ca862d09", 00:12:28.645 "assigned_rate_limits": { 00:12:28.645 "rw_ios_per_sec": 0, 00:12:28.645 "rw_mbytes_per_sec": 0, 00:12:28.645 "r_mbytes_per_sec": 0, 00:12:28.645 "w_mbytes_per_sec": 0 00:12:28.645 }, 00:12:28.645 "claimed": false, 00:12:28.645 "zoned": false, 00:12:28.645 "supported_io_types": { 00:12:28.645 "read": true, 00:12:28.645 "write": true, 00:12:28.645 "unmap": true, 00:12:28.645 "write_zeroes": true, 00:12:28.645 "flush": true, 00:12:28.645 "reset": true, 00:12:28.645 "compare": false, 00:12:28.645 "compare_and_write": false, 00:12:28.645 "abort": true, 00:12:28.645 "nvme_admin": false, 00:12:28.645 "nvme_io": false 00:12:28.645 }, 00:12:28.645 "memory_domains": [ 00:12:28.645 { 00:12:28.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.645 "dma_device_type": 2 00:12:28.645 } 00:12:28.645 ], 00:12:28.645 "driver_specific": { 00:12:28.645 "passthru": { 00:12:28.645 "name": "Passthru0", 00:12:28.645 "base_bdev_name": "Malloc2" 00:12:28.645 } 00:12:28.645 } 00:12:28.645 } 00:12:28.645 ]' 00:12:28.645 11:56:34 -- rpc/rpc.sh@21 -- # jq length 00:12:28.903 11:56:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:28.903 11:56:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:28.903 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.903 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.903 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.903 11:56:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:28.903 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.903 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.903 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.903 11:56:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:28.903 11:56:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.903 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.903 11:56:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.903 11:56:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:28.903 11:56:34 -- rpc/rpc.sh@26 -- # jq length 00:12:28.903 11:56:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:28.903 00:12:28.903 real 0m0.295s 00:12:28.903 user 0m0.207s 00:12:28.903 sys 0m0.023s 00:12:28.903 11:56:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.903 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.903 ************************************ 00:12:28.903 END TEST 
rpc_daemon_integrity 00:12:28.903 ************************************ 00:12:28.903 11:56:34 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:12:28.903 11:56:34 -- rpc/rpc.sh@84 -- # killprocess 115489 00:12:28.903 11:56:34 -- common/autotest_common.sh@936 -- # '[' -z 115489 ']' 00:12:28.903 11:56:34 -- common/autotest_common.sh@940 -- # kill -0 115489 00:12:28.903 11:56:34 -- common/autotest_common.sh@941 -- # uname 00:12:28.903 11:56:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.903 11:56:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115489 00:12:28.903 11:56:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:28.903 11:56:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:28.903 11:56:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115489' 00:12:28.903 killing process with pid 115489 00:12:28.904 11:56:34 -- common/autotest_common.sh@955 -- # kill 115489 00:12:28.904 11:56:34 -- common/autotest_common.sh@960 -- # wait 115489 00:12:29.489 ************************************ 00:12:29.489 END TEST rpc 00:12:29.489 ************************************ 00:12:29.489 00:12:29.489 real 0m2.967s 00:12:29.489 user 0m3.866s 00:12:29.489 sys 0m0.682s 00:12:29.489 11:56:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.489 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:29.489 11:56:34 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:29.489 11:56:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:29.489 11:56:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.489 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:29.489 ************************************ 00:12:29.489 START TEST rpc_client 00:12:29.489 ************************************ 00:12:29.489 11:56:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:29.489 * Looking for test storage... 00:12:29.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:12:29.489 11:56:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:29.489 11:56:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:29.489 11:56:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:29.789 11:56:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:29.789 11:56:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:29.789 11:56:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:29.789 11:56:35 -- scripts/common.sh@335 -- # IFS=.-: 00:12:29.789 11:56:35 -- scripts/common.sh@335 -- # read -ra ver1 00:12:29.789 11:56:35 -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.789 11:56:35 -- scripts/common.sh@336 -- # read -ra ver2 00:12:29.789 11:56:35 -- scripts/common.sh@337 -- # local 'op=<' 00:12:29.789 11:56:35 -- scripts/common.sh@339 -- # ver1_l=2 00:12:29.789 11:56:35 -- scripts/common.sh@340 -- # ver2_l=1 00:12:29.789 11:56:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:29.789 11:56:35 -- scripts/common.sh@343 -- # case "$op" in 00:12:29.789 11:56:35 -- scripts/common.sh@344 -- # : 1 00:12:29.789 11:56:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:29.789 11:56:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.789 11:56:35 -- scripts/common.sh@364 -- # decimal 1 00:12:29.789 11:56:35 -- scripts/common.sh@352 -- # local d=1 00:12:29.789 11:56:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.789 11:56:35 -- scripts/common.sh@354 -- # echo 1 00:12:29.789 11:56:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:29.789 11:56:35 -- scripts/common.sh@365 -- # decimal 2 00:12:29.789 11:56:35 -- scripts/common.sh@352 -- # local d=2 00:12:29.789 11:56:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.789 11:56:35 -- scripts/common.sh@354 -- # echo 2 00:12:29.789 11:56:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:29.789 11:56:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:29.789 11:56:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:29.789 11:56:35 -- scripts/common.sh@367 -- # return 0 00:12:29.789 11:56:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:29.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.789 --rc genhtml_branch_coverage=1 00:12:29.789 --rc genhtml_function_coverage=1 00:12:29.789 --rc genhtml_legend=1 00:12:29.789 --rc geninfo_all_blocks=1 00:12:29.789 --rc geninfo_unexecuted_blocks=1 00:12:29.789 00:12:29.789 ' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:29.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.789 --rc genhtml_branch_coverage=1 00:12:29.789 --rc genhtml_function_coverage=1 00:12:29.789 --rc genhtml_legend=1 00:12:29.789 --rc geninfo_all_blocks=1 00:12:29.789 --rc geninfo_unexecuted_blocks=1 00:12:29.789 00:12:29.789 ' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:29.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.789 --rc genhtml_branch_coverage=1 00:12:29.789 --rc genhtml_function_coverage=1 00:12:29.789 --rc genhtml_legend=1 00:12:29.789 --rc geninfo_all_blocks=1 00:12:29.789 --rc geninfo_unexecuted_blocks=1 00:12:29.789 00:12:29.789 ' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:29.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.789 --rc genhtml_branch_coverage=1 00:12:29.789 --rc genhtml_function_coverage=1 00:12:29.789 --rc genhtml_legend=1 00:12:29.789 --rc geninfo_all_blocks=1 00:12:29.789 --rc geninfo_unexecuted_blocks=1 00:12:29.789 00:12:29.789 ' 00:12:29.789 11:56:35 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:12:29.789 OK 00:12:29.789 11:56:35 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:29.789 00:12:29.789 real 0m0.254s 00:12:29.789 user 0m0.193s 00:12:29.789 sys 0m0.079s 00:12:29.789 11:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.789 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:12:29.789 ************************************ 00:12:29.789 END TEST rpc_client 00:12:29.789 ************************************ 00:12:29.789 11:56:35 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:29.789 11:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.789 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:12:29.789 ************************************ 00:12:29.789 START TEST 
json_config 00:12:29.789 ************************************ 00:12:29.789 11:56:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:29.789 11:56:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:29.789 11:56:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:29.789 11:56:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:29.789 11:56:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:29.789 11:56:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:29.789 11:56:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:29.789 11:56:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:29.789 11:56:35 -- scripts/common.sh@335 -- # IFS=.-: 00:12:29.790 11:56:35 -- scripts/common.sh@335 -- # read -ra ver1 00:12:29.790 11:56:35 -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.790 11:56:35 -- scripts/common.sh@336 -- # read -ra ver2 00:12:29.790 11:56:35 -- scripts/common.sh@337 -- # local 'op=<' 00:12:29.790 11:56:35 -- scripts/common.sh@339 -- # ver1_l=2 00:12:29.790 11:56:35 -- scripts/common.sh@340 -- # ver2_l=1 00:12:29.790 11:56:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:29.790 11:56:35 -- scripts/common.sh@343 -- # case "$op" in 00:12:29.790 11:56:35 -- scripts/common.sh@344 -- # : 1 00:12:29.790 11:56:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:29.790 11:56:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.790 11:56:35 -- scripts/common.sh@364 -- # decimal 1 00:12:29.790 11:56:35 -- scripts/common.sh@352 -- # local d=1 00:12:29.790 11:56:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.790 11:56:35 -- scripts/common.sh@354 -- # echo 1 00:12:29.790 11:56:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:29.790 11:56:35 -- scripts/common.sh@365 -- # decimal 2 00:12:29.790 11:56:35 -- scripts/common.sh@352 -- # local d=2 00:12:29.790 11:56:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.790 11:56:35 -- scripts/common.sh@354 -- # echo 2 00:12:29.790 11:56:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:29.790 11:56:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:29.790 11:56:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:29.790 11:56:35 -- scripts/common.sh@367 -- # return 0 00:12:29.790 11:56:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.790 11:56:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.790 --rc genhtml_branch_coverage=1 00:12:29.790 --rc genhtml_function_coverage=1 00:12:29.790 --rc genhtml_legend=1 00:12:29.790 --rc geninfo_all_blocks=1 00:12:29.790 --rc geninfo_unexecuted_blocks=1 00:12:29.790 00:12:29.790 ' 00:12:29.790 11:56:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.790 --rc genhtml_branch_coverage=1 00:12:29.790 --rc genhtml_function_coverage=1 00:12:29.790 --rc genhtml_legend=1 00:12:29.790 --rc geninfo_all_blocks=1 00:12:29.790 --rc geninfo_unexecuted_blocks=1 00:12:29.790 00:12:29.790 ' 00:12:29.790 11:56:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.790 --rc genhtml_branch_coverage=1 00:12:29.790 --rc genhtml_function_coverage=1 00:12:29.790 --rc genhtml_legend=1 00:12:29.790 --rc 
geninfo_all_blocks=1 00:12:29.790 --rc geninfo_unexecuted_blocks=1 00:12:29.790 00:12:29.790 ' 00:12:29.790 11:56:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.790 --rc genhtml_branch_coverage=1 00:12:29.790 --rc genhtml_function_coverage=1 00:12:29.790 --rc genhtml_legend=1 00:12:29.790 --rc geninfo_all_blocks=1 00:12:29.790 --rc geninfo_unexecuted_blocks=1 00:12:29.790 00:12:29.790 ' 00:12:29.790 11:56:35 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.790 11:56:35 -- nvmf/common.sh@7 -- # uname -s 00:12:29.790 11:56:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.790 11:56:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.790 11:56:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.790 11:56:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.790 11:56:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.790 11:56:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.790 11:56:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.790 11:56:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.790 11:56:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.047 11:56:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.047 11:56:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6d30e243-a3fc-465e-a5aa-fb6308362802 00:12:30.047 11:56:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=6d30e243-a3fc-465e-a5aa-fb6308362802 00:12:30.047 11:56:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.047 11:56:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.047 11:56:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:30.047 11:56:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.047 11:56:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.047 11:56:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.047 11:56:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.047 11:56:35 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:30.047 11:56:35 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:30.047 11:56:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:30.047 11:56:35 -- paths/export.sh@5 -- # export PATH 
00:12:30.047 11:56:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:30.047 11:56:35 -- nvmf/common.sh@46 -- # : 0 00:12:30.047 11:56:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:30.047 11:56:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:30.047 11:56:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:30.047 11:56:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.047 11:56:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.047 11:56:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:30.047 11:56:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:30.047 11:56:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:30.047 11:56:35 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:30.047 11:56:35 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:12:30.047 11:56:35 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:12:30.047 11:56:35 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:12:30.047 11:56:35 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:12:30.047 11:56:35 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:12:30.047 11:56:35 -- json_config/json_config.sh@32 -- # declare -A app_params 00:12:30.047 11:56:35 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:12:30.047 11:56:35 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:12:30.047 11:56:35 -- json_config/json_config.sh@43 -- # last_event_id=0 00:12:30.047 11:56:35 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:30.047 11:56:35 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:12:30.047 INFO: JSON configuration test init 00:12:30.047 11:56:35 -- json_config/json_config.sh@420 -- # json_config_test_init 00:12:30.047 11:56:35 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:12:30.047 11:56:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.047 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.047 11:56:35 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:12:30.047 11:56:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.047 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.047 11:56:35 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:12:30.047 11:56:35 -- json_config/json_config.sh@98 -- # local app=target 00:12:30.047 11:56:35 -- json_config/json_config.sh@99 -- # shift 
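The json_config run that starts here launches its own target instance on a dedicated RPC socket and then feeds it configuration over that socket. Reduced to plain commands, the steps visible in the following trace amount to roughly the sketch below; the flags and socket path are the ones recorded by the test, the config file name comes from configs_path above, and feeding load_config via stdin is an assumption about how the helper wires it up.

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods             # poll until the socket is listening
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_tgt_config.json
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0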
00:12:30.047 11:56:35 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:12:30.047 11:56:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:12:30.047 11:56:35 -- json_config/json_config.sh@111 -- # app_pid[$app]=115773 00:12:30.047 11:56:35 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:12:30.047 11:56:35 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:12:30.047 Waiting for target to run... 00:12:30.048 11:56:35 -- json_config/json_config.sh@114 -- # waitforlisten 115773 /var/tmp/spdk_tgt.sock 00:12:30.048 11:56:35 -- common/autotest_common.sh@829 -- # '[' -z 115773 ']' 00:12:30.048 11:56:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:30.048 11:56:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.048 11:56:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:30.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:30.048 11:56:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.048 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.048 [2024-11-29 11:56:35.399555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:30.048 [2024-11-29 11:56:35.399760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115773 ] 00:12:30.612 [2024-11-29 11:56:35.829011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.612 [2024-11-29 11:56:35.901122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:30.612 [2024-11-29 11:56:35.901515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.870 00:12:30.870 11:56:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.870 11:56:36 -- common/autotest_common.sh@862 -- # return 0 00:12:30.870 11:56:36 -- json_config/json_config.sh@115 -- # echo '' 00:12:30.870 11:56:36 -- json_config/json_config.sh@322 -- # create_accel_config 00:12:30.870 11:56:36 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:12:30.870 11:56:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.870 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:30.870 11:56:36 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:12:30.870 11:56:36 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:12:30.870 11:56:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.870 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:30.870 11:56:36 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:12:30.870 11:56:36 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:12:30.870 11:56:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:12:31.436 11:56:36 -- 
json_config/json_config.sh@329 -- # tgt_check_notification_types 00:12:31.436 11:56:36 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:12:31.436 11:56:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.436 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:31.436 11:56:36 -- json_config/json_config.sh@48 -- # local ret=0 00:12:31.436 11:56:36 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:12:31.436 11:56:36 -- json_config/json_config.sh@49 -- # local enabled_types 00:12:31.436 11:56:36 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:12:31.436 11:56:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:12:31.436 11:56:36 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:12:31.694 11:56:36 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:12:31.694 11:56:36 -- json_config/json_config.sh@51 -- # local get_types 00:12:31.694 11:56:36 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:12:31.694 11:56:36 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:12:31.694 11:56:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.694 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:31.694 11:56:37 -- json_config/json_config.sh@58 -- # return 0 00:12:31.694 11:56:37 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:12:31.694 11:56:37 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:12:31.694 11:56:37 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:12:31.694 11:56:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.694 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:12:31.694 11:56:37 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:12:31.694 11:56:37 -- json_config/json_config.sh@160 -- # local expected_notifications 00:12:31.694 11:56:37 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:12:31.694 11:56:37 -- json_config/json_config.sh@164 -- # get_notifications 00:12:31.694 11:56:37 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:12:31.694 11:56:37 -- json_config/json_config.sh@64 -- # IFS=: 00:12:31.694 11:56:37 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:31.694 11:56:37 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:12:31.694 11:56:37 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:12:31.694 11:56:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:12:31.952 11:56:37 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:12:31.952 11:56:37 -- json_config/json_config.sh@64 -- # IFS=: 00:12:31.952 11:56:37 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:31.952 11:56:37 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:12:31.952 11:56:37 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:12:31.952 11:56:37 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:12:31.952 11:56:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_split_create Nvme0n1 2 00:12:32.211 Nvme0n1p0 Nvme0n1p1 00:12:32.212 11:56:37 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:12:32.212 11:56:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:12:32.470 [2024-11-29 11:56:37.785274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:12:32.470 [2024-11-29 11:56:37.785426] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:12:32.470 00:12:32.470 11:56:37 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:12:32.470 11:56:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:12:32.728 Malloc3 00:12:32.728 11:56:38 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:12:32.728 11:56:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:12:32.987 [2024-11-29 11:56:38.265683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:32.987 [2024-11-29 11:56:38.265828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.987 [2024-11-29 11:56:38.265880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:12:32.987 [2024-11-29 11:56:38.265919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.987 [2024-11-29 11:56:38.268766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.987 [2024-11-29 11:56:38.268837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:12:32.987 PTBdevFromMalloc3 00:12:32.987 11:56:38 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:12:32.987 11:56:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:12:33.245 Null0 00:12:33.245 11:56:38 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:12:33.245 11:56:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:12:33.245 Malloc0 00:12:33.245 11:56:38 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:12:33.245 11:56:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:12:33.504 Malloc1 00:12:33.504 11:56:38 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:12:33.504 11:56:38 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:12:34.070 102400+0 records in 00:12:34.070 102400+0 records out 00:12:34.070 104857600 bytes (105 MB, 100 MiB) copied, 0.359483 s, 292 MB/s 00:12:34.070 11:56:39 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio 
aio_disk 1024 00:12:34.070 11:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:12:34.359 aio_disk 00:12:34.359 11:56:39 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:12:34.359 11:56:39 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:12:34.359 11:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:12:34.617 4fe1ac37-dbe0-479a-b5bd-e937cd48596b 00:12:34.617 11:56:39 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:12:34.617 11:56:39 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:12:34.617 11:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:12:34.876 11:56:40 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:12:34.876 11:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:12:35.135 11:56:40 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:12:35.135 11:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:12:35.393 11:56:40 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:12:35.393 11:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:12:35.651 11:56:41 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:12:35.651 11:56:41 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:12:35.651 11:56:41 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a96aa315-261d-432a-ab0c-020e4b3bde4d bdev_register:bbefb50d-387a-4253-8dda-e7bd41cfa21f bdev_register:77001ea8-b12a-409f-a23c-0b7714fae50e bdev_register:fbe35b2c-9a3d-40c9-acf5-28257a1423c0 00:12:35.651 11:56:41 -- json_config/json_config.sh@70 -- # local events_to_check 00:12:35.651 11:56:41 -- json_config/json_config.sh@71 -- # local recorded_events 00:12:35.651 11:56:41 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:12:35.651 11:56:41 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk 
bdev_register:a96aa315-261d-432a-ab0c-020e4b3bde4d bdev_register:bbefb50d-387a-4253-8dda-e7bd41cfa21f bdev_register:77001ea8-b12a-409f-a23c-0b7714fae50e bdev_register:fbe35b2c-9a3d-40c9-acf5-28257a1423c0 00:12:35.651 11:56:41 -- json_config/json_config.sh@74 -- # sort 00:12:35.651 11:56:41 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:12:35.651 11:56:41 -- json_config/json_config.sh@75 -- # get_notifications 00:12:35.651 11:56:41 -- json_config/json_config.sh@75 -- # sort 00:12:35.651 11:56:41 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:12:35.651 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.651 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.651 11:56:41 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:12:35.651 11:56:41 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:12:35.651 11:56:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 
-- # read -r ev_type ev_ctx event_id 00:12:35.909 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.909 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.910 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.910 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:a96aa315-261d-432a-ab0c-020e4b3bde4d 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.910 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:bbefb50d-387a-4253-8dda-e7bd41cfa21f 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.910 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:77001ea8-b12a-409f-a23c-0b7714fae50e 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.910 11:56:41 -- json_config/json_config.sh@65 -- # echo bdev_register:fbe35b2c-9a3d-40c9-acf5-28257a1423c0 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # IFS=: 00:12:35.910 11:56:41 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:12:35.910 11:56:41 -- json_config/json_config.sh@77 -- # [[ bdev_register:77001ea8-b12a-409f-a23c-0b7714fae50e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a96aa315-261d-432a-ab0c-020e4b3bde4d bdev_register:aio_disk bdev_register:bbefb50d-387a-4253-8dda-e7bd41cfa21f bdev_register:fbe35b2c-9a3d-40c9-acf5-28257a1423c0 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\7\0\0\1\e\a\8\-\b\1\2\a\-\4\0\9\f\-\a\2\3\c\-\0\b\7\7\1\4\f\a\e\5\0\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\9\6\a\a\3\1\5\-\2\6\1\d\-\4\3\2\a\-\a\b\0\c\-\0\2\0\e\4\b\3\b\d\e\4\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\b\e\f\b\5\0\d\-\3\8\7\a\-\4\2\5\3\-\8\d\d\a\-\e\7\b\d\4\1\c\f\a\2\1\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\b\e\3\5\b\2\c\-\9\a\3\d\-\4\0\c\9\-\a\c\f\5\-\2\8\2\5\7\a\1\4\2\3\c\0 ]] 00:12:35.910 11:56:41 -- json_config/json_config.sh@89 -- # cat 00:12:35.910 11:56:41 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:77001ea8-b12a-409f-a23c-0b7714fae50e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 
bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a96aa315-261d-432a-ab0c-020e4b3bde4d bdev_register:aio_disk bdev_register:bbefb50d-387a-4253-8dda-e7bd41cfa21f bdev_register:fbe35b2c-9a3d-40c9-acf5-28257a1423c0 00:12:35.910 Expected events matched: 00:12:35.910 bdev_register:77001ea8-b12a-409f-a23c-0b7714fae50e 00:12:35.910 bdev_register:Malloc0 00:12:35.910 bdev_register:Malloc0p0 00:12:35.910 bdev_register:Malloc0p1 00:12:35.910 bdev_register:Malloc0p2 00:12:35.910 bdev_register:Malloc1 00:12:35.910 bdev_register:Malloc3 00:12:35.910 bdev_register:Null0 00:12:35.910 bdev_register:Nvme0n1 00:12:35.910 bdev_register:Nvme0n1p0 00:12:35.910 bdev_register:Nvme0n1p1 00:12:35.910 bdev_register:PTBdevFromMalloc3 00:12:35.910 bdev_register:a96aa315-261d-432a-ab0c-020e4b3bde4d 00:12:35.910 bdev_register:aio_disk 00:12:35.910 bdev_register:bbefb50d-387a-4253-8dda-e7bd41cfa21f 00:12:35.910 bdev_register:fbe35b2c-9a3d-40c9-acf5-28257a1423c0 00:12:35.910 11:56:41 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:12:35.910 11:56:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.910 11:56:41 -- common/autotest_common.sh@10 -- # set +x 00:12:35.910 11:56:41 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:12:35.910 11:56:41 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:12:35.910 11:56:41 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:12:35.910 11:56:41 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:12:35.910 11:56:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.910 11:56:41 -- common/autotest_common.sh@10 -- # set +x 00:12:36.168 11:56:41 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:12:36.168 11:56:41 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:36.168 11:56:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:36.426 MallocBdevForConfigChangeCheck 00:12:36.426 11:56:41 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:12:36.426 11:56:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.426 11:56:41 -- common/autotest_common.sh@10 -- # set +x 00:12:36.426 11:56:41 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:12:36.426 11:56:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:36.685 INFO: shutting down applications... 00:12:36.685 11:56:42 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
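The notification check that produced the "Expected events matched" list above boils down to pulling every notification from the target, normalizing each one to a type:ctx token, and comparing the sorted result against what the test registered along the way. A minimal sketch of that idea, reusing the socket path and an adapted version of the jq filter from this run (the expected list is shortened here for illustration):

  #!/usr/bin/env bash
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # everything the test expects to have seen, one "type:ctx" token per event
  expected=$(printf '%s\n' bdev_register:Malloc0 bdev_register:Nvme0n1 bdev_register:aio_disk | sort)

  # ask the target for all notifications starting from id 0 and normalize them the same way
  recorded=$($RPC notify_get_notifications -i 0 | jq -r '.[] | "\(.type):\(.ctx)"' | sort)

  [ "$expected" = "$recorded" ] && echo 'Expected events matched' || { echo 'notification mismatch'; exit 1; }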
00:12:36.685 11:56:42 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:12:36.685 11:56:42 -- json_config/json_config.sh@431 -- # json_config_clear target 00:12:36.685 11:56:42 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:12:36.685 11:56:42 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:12:36.943 [2024-11-29 11:56:42.337105] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:12:37.203 Calling clear_vhost_scsi_subsystem 00:12:37.203 Calling clear_iscsi_subsystem 00:12:37.203 Calling clear_vhost_blk_subsystem 00:12:37.203 Calling clear_nbd_subsystem 00:12:37.203 Calling clear_nvmf_subsystem 00:12:37.203 Calling clear_bdev_subsystem 00:12:37.203 Calling clear_accel_subsystem 00:12:37.203 Calling clear_iobuf_subsystem 00:12:37.203 Calling clear_sock_subsystem 00:12:37.203 Calling clear_vmd_subsystem 00:12:37.203 Calling clear_scheduler_subsystem 00:12:37.203 11:56:42 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:12:37.203 11:56:42 -- json_config/json_config.sh@396 -- # count=100 00:12:37.203 11:56:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:12:37.203 11:56:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:37.203 11:56:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:37.203 11:56:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:12:37.462 11:56:42 -- json_config/json_config.sh@398 -- # break 00:12:37.462 11:56:42 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:12:37.462 11:56:42 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:12:37.462 11:56:42 -- json_config/json_config.sh@120 -- # local app=target 00:12:37.462 11:56:42 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:12:37.462 11:56:42 -- json_config/json_config.sh@124 -- # [[ -n 115773 ]] 00:12:37.462 11:56:42 -- json_config/json_config.sh@127 -- # kill -SIGINT 115773 00:12:37.462 11:56:42 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:12:37.462 11:56:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:12:37.462 11:56:42 -- json_config/json_config.sh@130 -- # kill -0 115773 00:12:37.462 11:56:42 -- json_config/json_config.sh@134 -- # sleep 0.5 00:12:38.031 SPDK target shutdown done 00:12:38.031 INFO: relaunching applications... 00:12:38.031 11:56:43 -- json_config/json_config.sh@129 -- # (( i++ )) 00:12:38.031 11:56:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:12:38.031 11:56:43 -- json_config/json_config.sh@130 -- # kill -0 115773 00:12:38.031 11:56:43 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:12:38.031 11:56:43 -- json_config/json_config.sh@132 -- # break 00:12:38.031 11:56:43 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:12:38.031 11:56:43 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:12:38.031 11:56:43 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
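The shutdown that follows is the usual SIGINT-then-poll pattern: send SIGINT to the target and give it up to 30 half-second intervals to exit on its own. A condensed sketch of that loop (pid value taken from this run; error handling simplified):

  app_pid=115773
  kill -SIGINT "$app_pid"                  # ask spdk_tgt to shut down cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5                            # still alive, wait a bit and re-check
  done
  # the test reports an error and escalates if the loop runs out without the process exiting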
00:12:38.031 11:56:43 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:38.031 11:56:43 -- json_config/json_config.sh@98 -- # local app=target 00:12:38.031 11:56:43 -- json_config/json_config.sh@99 -- # shift 00:12:38.031 11:56:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:12:38.031 11:56:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:12:38.031 11:56:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:12:38.031 11:56:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:12:38.031 11:56:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:12:38.031 11:56:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=116031 00:12:38.031 Waiting for target to run... 00:12:38.031 11:56:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:12:38.031 11:56:43 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:38.031 11:56:43 -- json_config/json_config.sh@114 -- # waitforlisten 116031 /var/tmp/spdk_tgt.sock 00:12:38.031 11:56:43 -- common/autotest_common.sh@829 -- # '[' -z 116031 ']' 00:12:38.031 11:56:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:38.031 11:56:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:38.031 11:56:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:38.031 11:56:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.031 11:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.031 [2024-11-29 11:56:43.446090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
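Relaunching reuses the configuration saved a moment earlier: spdk_tgt is started with --json pointing at the dumped file, and the test then waits until the RPC socket answers before continuing. A rough equivalent of that start-and-wait step; the polling loop below is illustrative (the test uses its own waitforlisten helper), with spdk_get_version serving as a cheap RPC to probe the socket:

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC_SOCK=/var/tmp/spdk_tgt.sock
  CONFIG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

  "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
  app_pid=$!

  # poll the RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; do
      kill -0 "$app_pid" 2>/dev/null || { echo 'target died during startup'; exit 1; }
      sleep 0.5
  done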
00:12:38.032 [2024-11-29 11:56:43.446507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116031 ] 00:12:38.600 [2024-11-29 11:56:43.906070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.600 [2024-11-29 11:56:43.972879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.600 [2024-11-29 11:56:43.973147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.859 [2024-11-29 11:56:44.127305] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:12:38.859 [2024-11-29 11:56:44.127468] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:12:38.859 [2024-11-29 11:56:44.135243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:12:38.859 [2024-11-29 11:56:44.135307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:12:38.859 [2024-11-29 11:56:44.143289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.859 [2024-11-29 11:56:44.143366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:38.859 [2024-11-29 11:56:44.143403] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:38.859 [2024-11-29 11:56:44.230325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.859 [2024-11-29 11:56:44.230474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.859 [2024-11-29 11:56:44.230523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.859 [2024-11-29 11:56:44.230558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.859 [2024-11-29 11:56:44.231130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.859 [2024-11-29 11:56:44.231199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:12:39.118 11:56:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.118 00:12:39.118 11:56:44 -- common/autotest_common.sh@862 -- # return 0 00:12:39.118 11:56:44 -- json_config/json_config.sh@115 -- # echo '' 00:12:39.118 11:56:44 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:12:39.118 INFO: Checking if target configuration is the same... 00:12:39.118 11:56:44 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:39.118 11:56:44 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:39.118 11:56:44 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:12:39.118 11:56:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:39.118 + '[' 2 -ne 2 ']' 00:12:39.118 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:39.118 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:12:39.118 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:39.118 +++ basename /dev/fd/62 00:12:39.118 ++ mktemp /tmp/62.XXX 00:12:39.118 + tmp_file_1=/tmp/62.Sy8 00:12:39.118 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:39.118 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:39.118 + tmp_file_2=/tmp/spdk_tgt_config.json.5eT 00:12:39.118 + ret=0 00:12:39.118 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:39.376 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:39.376 + diff -u /tmp/62.Sy8 /tmp/spdk_tgt_config.json.5eT 00:12:39.376 INFO: JSON config files are the same 00:12:39.376 + echo 'INFO: JSON config files are the same' 00:12:39.376 + rm /tmp/62.Sy8 /tmp/spdk_tgt_config.json.5eT 00:12:39.376 + exit 0 00:12:39.376 11:56:44 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:12:39.376 INFO: changing configuration and checking if this can be detected... 00:12:39.376 11:56:44 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:12:39.376 11:56:44 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:39.376 11:56:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:39.635 11:56:45 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:39.635 11:56:45 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:12:39.635 11:56:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:39.635 + '[' 2 -ne 2 ']' 00:12:39.635 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:39.893 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:39.893 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:39.893 +++ basename /dev/fd/62 00:12:39.893 ++ mktemp /tmp/62.XXX 00:12:39.893 + tmp_file_1=/tmp/62.xBX 00:12:39.893 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:39.893 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:39.893 + tmp_file_2=/tmp/spdk_tgt_config.json.oiY 00:12:39.893 + ret=0 00:12:39.893 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:40.154 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:40.154 + diff -u /tmp/62.xBX /tmp/spdk_tgt_config.json.oiY 00:12:40.154 + ret=1 00:12:40.154 + echo '=== Start of file: /tmp/62.xBX ===' 00:12:40.154 + cat /tmp/62.xBX 00:12:40.154 + echo '=== End of file: /tmp/62.xBX ===' 00:12:40.154 + echo '' 00:12:40.154 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oiY ===' 00:12:40.154 + cat /tmp/spdk_tgt_config.json.oiY 00:12:40.154 + echo '=== End of file: /tmp/spdk_tgt_config.json.oiY ===' 00:12:40.154 + echo '' 00:12:40.154 + rm /tmp/62.xBX /tmp/spdk_tgt_config.json.oiY 00:12:40.154 + exit 1 00:12:40.154 INFO: configuration change detected. 00:12:40.154 11:56:45 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
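Both comparisons above follow the same recipe: dump the live configuration, run it and the reference file through config_filter.py -method sort so key ordering cannot cause spurious differences, and diff the results; ret=0 means identical, ret=1 means a change was detected. A compact sketch of the same check using the paths from this run, assuming config_filter.py reads the JSON on stdin the way json_diff.sh drives it:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER="$SPDK_DIR/test/json_config/config_filter.py"

  live=$(mktemp /tmp/62.XXX)
  ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)

  $RPC save_config | $FILTER -method sort > "$live"                  # live target config, sorted
  $FILTER -method sort < "$SPDK_DIR/spdk_tgt_config.json" > "$ref"   # reference file, sorted

  if diff -u "$live" "$ref" > /dev/null; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi
  rm -f "$live" "$ref"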
00:12:40.154 11:56:45 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:12:40.154 11:56:45 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:12:40.154 11:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.154 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.154 11:56:45 -- json_config/json_config.sh@360 -- # local ret=0 00:12:40.154 11:56:45 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:12:40.154 11:56:45 -- json_config/json_config.sh@370 -- # [[ -n 116031 ]] 00:12:40.154 11:56:45 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:12:40.154 11:56:45 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:12:40.154 11:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.154 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.154 11:56:45 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:12:40.154 11:56:45 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:12:40.154 11:56:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:12:40.424 11:56:45 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:12:40.424 11:56:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:12:40.683 11:56:46 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:12:40.683 11:56:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:12:40.941 11:56:46 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:12:40.941 11:56:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:12:41.199 11:56:46 -- json_config/json_config.sh@246 -- # uname -s 00:12:41.199 11:56:46 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:12:41.199 11:56:46 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:12:41.199 11:56:46 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:12:41.199 11:56:46 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:12:41.199 11:56:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.199 11:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.199 11:56:46 -- json_config/json_config.sh@376 -- # killprocess 116031 00:12:41.199 11:56:46 -- common/autotest_common.sh@936 -- # '[' -z 116031 ']' 00:12:41.199 11:56:46 -- common/autotest_common.sh@940 -- # kill -0 116031 00:12:41.199 11:56:46 -- common/autotest_common.sh@941 -- # uname 00:12:41.199 11:56:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:41.199 11:56:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116031 00:12:41.199 11:56:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:41.199 11:56:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:41.199 killing process with pid 116031 00:12:41.199 11:56:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116031' 00:12:41.199 11:56:46 -- common/autotest_common.sh@955 -- # kill 116031 00:12:41.199 11:56:46 -- common/autotest_common.sh@960 -- # wait 116031 00:12:41.457 11:56:46 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:41.457 11:56:46 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:12:41.457 11:56:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.457 11:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.717 11:56:46 -- json_config/json_config.sh@381 -- # return 0 00:12:41.717 INFO: Success 00:12:41.717 11:56:46 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:12:41.717 00:12:41.717 real 0m11.842s 00:12:41.717 user 0m18.244s 00:12:41.717 sys 0m2.380s 00:12:41.717 11:56:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.717 11:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.717 ************************************ 00:12:41.717 END TEST json_config 00:12:41.717 ************************************ 00:12:41.717 11:56:47 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:41.717 11:56:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:41.717 11:56:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.717 11:56:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.717 ************************************ 00:12:41.717 START TEST json_config_extra_key 00:12:41.717 ************************************ 00:12:41.717 11:56:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:41.717 11:56:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:41.717 11:56:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:41.717 11:56:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.717 11:56:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.717 11:56:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.717 11:56:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.717 11:56:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.717 11:56:47 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.717 11:56:47 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.717 11:56:47 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.717 11:56:47 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.717 11:56:47 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.717 11:56:47 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.717 11:56:47 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.717 11:56:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.717 11:56:47 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.717 11:56:47 -- scripts/common.sh@344 -- # : 1 00:12:41.717 11:56:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.717 11:56:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.717 11:56:47 -- scripts/common.sh@364 -- # decimal 1 00:12:41.717 11:56:47 -- scripts/common.sh@352 -- # local d=1 00:12:41.717 11:56:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.717 11:56:47 -- scripts/common.sh@354 -- # echo 1 00:12:41.717 11:56:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.717 11:56:47 -- scripts/common.sh@365 -- # decimal 2 00:12:41.717 11:56:47 -- scripts/common.sh@352 -- # local d=2 00:12:41.717 11:56:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.717 11:56:47 -- scripts/common.sh@354 -- # echo 2 00:12:41.717 11:56:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.717 11:56:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.717 11:56:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.717 11:56:47 -- scripts/common.sh@367 -- # return 0 00:12:41.717 11:56:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.717 11:56:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.717 --rc genhtml_branch_coverage=1 00:12:41.717 --rc genhtml_function_coverage=1 00:12:41.717 --rc genhtml_legend=1 00:12:41.717 --rc geninfo_all_blocks=1 00:12:41.717 --rc geninfo_unexecuted_blocks=1 00:12:41.717 00:12:41.717 ' 00:12:41.717 11:56:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.717 --rc genhtml_branch_coverage=1 00:12:41.717 --rc genhtml_function_coverage=1 00:12:41.717 --rc genhtml_legend=1 00:12:41.717 --rc geninfo_all_blocks=1 00:12:41.717 --rc geninfo_unexecuted_blocks=1 00:12:41.717 00:12:41.717 ' 00:12:41.717 11:56:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.717 --rc genhtml_branch_coverage=1 00:12:41.717 --rc genhtml_function_coverage=1 00:12:41.717 --rc genhtml_legend=1 00:12:41.717 --rc geninfo_all_blocks=1 00:12:41.717 --rc geninfo_unexecuted_blocks=1 00:12:41.717 00:12:41.717 ' 00:12:41.717 11:56:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.717 --rc genhtml_branch_coverage=1 00:12:41.717 --rc genhtml_function_coverage=1 00:12:41.717 --rc genhtml_legend=1 00:12:41.717 --rc geninfo_all_blocks=1 00:12:41.717 --rc geninfo_unexecuted_blocks=1 00:12:41.717 00:12:41.717 ' 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.717 11:56:47 -- nvmf/common.sh@7 -- # uname -s 00:12:41.717 11:56:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.717 11:56:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.717 11:56:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.717 11:56:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.717 11:56:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.717 11:56:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.717 11:56:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.717 11:56:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.717 11:56:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.717 11:56:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.717 11:56:47 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:892af486-2f15-4d80-97ef-ba30a43304b9 00:12:41.717 11:56:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=892af486-2f15-4d80-97ef-ba30a43304b9 00:12:41.717 11:56:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.717 11:56:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.717 11:56:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:41.717 11:56:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.717 11:56:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.717 11:56:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.717 11:56:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.717 11:56:47 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:41.717 11:56:47 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:41.717 11:56:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:41.717 11:56:47 -- paths/export.sh@5 -- # export PATH 00:12:41.717 11:56:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:41.717 11:56:47 -- nvmf/common.sh@46 -- # : 0 00:12:41.717 11:56:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.717 11:56:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.717 11:56:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.717 11:56:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.717 11:56:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.717 11:56:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:41.717 11:56:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.717 11:56:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:41.717 11:56:47 -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:41.717 11:56:47 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:41.718 INFO: launching applications... 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@25 -- # shift 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=116204 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:12:41.718 Waiting for target to run... 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 116204 /var/tmp/spdk_tgt.sock 00:12:41.718 11:56:47 -- common/autotest_common.sh@829 -- # '[' -z 116204 ']' 00:12:41.718 11:56:47 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:41.718 11:56:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:41.718 11:56:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:41.718 11:56:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:41.718 11:56:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.718 11:56:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.976 [2024-11-29 11:56:47.241168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
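The extra-key variant keeps its per-application state in plain bash associative arrays, declared just before the launch above: one array each for the PID, the RPC socket, the extra spdk_tgt parameters, and the JSON config to load, all keyed by the app name ('target' here). A stripped-down sketch of that bookkeeping and how the launch uses it:

  declare -A app_pid=( ['target']='' )
  declare -A app_socket=( ['target']='/var/tmp/spdk_tgt.sock' )
  declare -A app_params=( ['target']='-m 0x1 -s 1024' )
  declare -A configs_path=( ['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json' )

  app=target
  # app_params is expanded unquoted on purpose so the individual flags split into words
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
      -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
  app_pid[$app]=$!    # remember the PID so the shutdown path can signal and poll it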
00:12:41.976 [2024-11-29 11:56:47.241422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116204 ] 00:12:42.235 [2024-11-29 11:56:47.669760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.235 [2024-11-29 11:56:47.740192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.235 [2024-11-29 11:56:47.740504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.800 11:56:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.800 00:12:42.800 11:56:48 -- common/autotest_common.sh@862 -- # return 0 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:12:42.800 INFO: shutting down applications... 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 116204 ]] 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 116204 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@50 -- # kill -0 116204 00:12:42.800 11:56:48 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@50 -- # kill -0 116204 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@52 -- # break 00:12:43.367 SPDK target shutdown done 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:12:43.367 Success 00:12:43.367 11:56:48 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:12:43.367 00:12:43.367 real 0m1.664s 00:12:43.367 user 0m1.552s 00:12:43.367 sys 0m0.485s 00:12:43.367 11:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:43.367 ************************************ 00:12:43.367 11:56:48 -- common/autotest_common.sh@10 -- # set +x 00:12:43.367 END TEST json_config_extra_key 00:12:43.367 ************************************ 00:12:43.367 11:56:48 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:43.367 11:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:43.367 11:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.367 11:56:48 -- common/autotest_common.sh@10 -- # set +x 00:12:43.367 ************************************ 00:12:43.367 START TEST alias_rpc 00:12:43.367 ************************************ 00:12:43.367 11:56:48 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:43.367 * Looking for test storage... 00:12:43.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:43.367 11:56:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:43.367 11:56:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:43.367 11:56:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:43.626 11:56:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:43.626 11:56:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:43.626 11:56:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:43.626 11:56:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:43.626 11:56:48 -- scripts/common.sh@335 -- # IFS=.-: 00:12:43.626 11:56:48 -- scripts/common.sh@335 -- # read -ra ver1 00:12:43.626 11:56:48 -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.626 11:56:48 -- scripts/common.sh@336 -- # read -ra ver2 00:12:43.626 11:56:48 -- scripts/common.sh@337 -- # local 'op=<' 00:12:43.626 11:56:48 -- scripts/common.sh@339 -- # ver1_l=2 00:12:43.626 11:56:48 -- scripts/common.sh@340 -- # ver2_l=1 00:12:43.626 11:56:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:43.626 11:56:48 -- scripts/common.sh@343 -- # case "$op" in 00:12:43.626 11:56:48 -- scripts/common.sh@344 -- # : 1 00:12:43.626 11:56:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:43.626 11:56:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.626 11:56:48 -- scripts/common.sh@364 -- # decimal 1 00:12:43.626 11:56:48 -- scripts/common.sh@352 -- # local d=1 00:12:43.626 11:56:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.626 11:56:48 -- scripts/common.sh@354 -- # echo 1 00:12:43.626 11:56:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:43.626 11:56:48 -- scripts/common.sh@365 -- # decimal 2 00:12:43.626 11:56:48 -- scripts/common.sh@352 -- # local d=2 00:12:43.626 11:56:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.626 11:56:48 -- scripts/common.sh@354 -- # echo 2 00:12:43.626 11:56:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:43.626 11:56:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:43.626 11:56:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:43.626 11:56:48 -- scripts/common.sh@367 -- # return 0 00:12:43.626 11:56:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.626 11:56:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.626 --rc genhtml_branch_coverage=1 00:12:43.626 --rc genhtml_function_coverage=1 00:12:43.626 --rc genhtml_legend=1 00:12:43.626 --rc geninfo_all_blocks=1 00:12:43.626 --rc geninfo_unexecuted_blocks=1 00:12:43.626 00:12:43.626 ' 00:12:43.626 11:56:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.626 --rc genhtml_branch_coverage=1 00:12:43.626 --rc genhtml_function_coverage=1 00:12:43.626 --rc genhtml_legend=1 00:12:43.626 --rc geninfo_all_blocks=1 00:12:43.626 --rc geninfo_unexecuted_blocks=1 00:12:43.626 00:12:43.626 ' 00:12:43.626 11:56:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.626 --rc genhtml_branch_coverage=1 00:12:43.626 --rc genhtml_function_coverage=1 00:12:43.626 --rc genhtml_legend=1 
00:12:43.626 --rc geninfo_all_blocks=1 00:12:43.626 --rc geninfo_unexecuted_blocks=1 00:12:43.626 00:12:43.626 ' 00:12:43.626 11:56:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.626 --rc genhtml_branch_coverage=1 00:12:43.626 --rc genhtml_function_coverage=1 00:12:43.626 --rc genhtml_legend=1 00:12:43.626 --rc geninfo_all_blocks=1 00:12:43.626 --rc geninfo_unexecuted_blocks=1 00:12:43.626 00:12:43.626 ' 00:12:43.626 11:56:48 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:43.626 11:56:48 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=116282 00:12:43.626 11:56:48 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 116282 00:12:43.626 11:56:48 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:43.626 11:56:48 -- common/autotest_common.sh@829 -- # '[' -z 116282 ']' 00:12:43.626 11:56:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.626 11:56:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.626 11:56:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.626 11:56:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.626 11:56:48 -- common/autotest_common.sh@10 -- # set +x 00:12:43.626 [2024-11-29 11:56:48.968321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:43.626 [2024-11-29 11:56:48.968545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116282 ] 00:12:43.626 [2024-11-29 11:56:49.109444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.885 [2024-11-29 11:56:49.200589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:43.885 [2024-11-29 11:56:49.200824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.819 11:56:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.819 11:56:49 -- common/autotest_common.sh@862 -- # return 0 00:12:44.819 11:56:49 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:44.819 11:56:50 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 116282 00:12:44.819 11:56:50 -- common/autotest_common.sh@936 -- # '[' -z 116282 ']' 00:12:44.819 11:56:50 -- common/autotest_common.sh@940 -- # kill -0 116282 00:12:44.819 11:56:50 -- common/autotest_common.sh@941 -- # uname 00:12:44.819 11:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:44.819 11:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116282 00:12:44.819 11:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:44.819 11:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:44.819 killing process with pid 116282 00:12:44.819 11:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116282' 00:12:44.819 11:56:50 -- common/autotest_common.sh@955 -- # kill 116282 00:12:44.819 11:56:50 -- common/autotest_common.sh@960 -- # wait 116282 00:12:45.386 00:12:45.386 real 0m1.956s 00:12:45.386 user 0m2.177s 00:12:45.386 sys 0m0.497s 00:12:45.386 
11:56:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.386 ************************************ 00:12:45.386 11:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.386 END TEST alias_rpc 00:12:45.386 ************************************ 00:12:45.386 11:56:50 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:12:45.386 11:56:50 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:45.386 11:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:45.386 11:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.386 11:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.386 ************************************ 00:12:45.386 START TEST spdkcli_tcp 00:12:45.386 ************************************ 00:12:45.386 11:56:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:45.386 * Looking for test storage... 00:12:45.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:12:45.386 11:56:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:45.386 11:56:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:45.386 11:56:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:45.644 11:56:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:45.644 11:56:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:45.644 11:56:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:45.644 11:56:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:45.644 11:56:50 -- scripts/common.sh@335 -- # IFS=.-: 00:12:45.644 11:56:50 -- scripts/common.sh@335 -- # read -ra ver1 00:12:45.644 11:56:50 -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.644 11:56:50 -- scripts/common.sh@336 -- # read -ra ver2 00:12:45.644 11:56:50 -- scripts/common.sh@337 -- # local 'op=<' 00:12:45.644 11:56:50 -- scripts/common.sh@339 -- # ver1_l=2 00:12:45.644 11:56:50 -- scripts/common.sh@340 -- # ver2_l=1 00:12:45.644 11:56:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:45.644 11:56:50 -- scripts/common.sh@343 -- # case "$op" in 00:12:45.644 11:56:50 -- scripts/common.sh@344 -- # : 1 00:12:45.644 11:56:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:45.644 11:56:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.644 11:56:50 -- scripts/common.sh@364 -- # decimal 1 00:12:45.644 11:56:50 -- scripts/common.sh@352 -- # local d=1 00:12:45.644 11:56:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.644 11:56:50 -- scripts/common.sh@354 -- # echo 1 00:12:45.644 11:56:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:45.644 11:56:50 -- scripts/common.sh@365 -- # decimal 2 00:12:45.644 11:56:50 -- scripts/common.sh@352 -- # local d=2 00:12:45.644 11:56:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.644 11:56:50 -- scripts/common.sh@354 -- # echo 2 00:12:45.644 11:56:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:45.644 11:56:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:45.644 11:56:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:45.644 11:56:50 -- scripts/common.sh@367 -- # return 0 00:12:45.644 11:56:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.644 11:56:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.644 --rc genhtml_branch_coverage=1 00:12:45.644 --rc genhtml_function_coverage=1 00:12:45.644 --rc genhtml_legend=1 00:12:45.644 --rc geninfo_all_blocks=1 00:12:45.644 --rc geninfo_unexecuted_blocks=1 00:12:45.644 00:12:45.644 ' 00:12:45.644 11:56:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.644 --rc genhtml_branch_coverage=1 00:12:45.644 --rc genhtml_function_coverage=1 00:12:45.644 --rc genhtml_legend=1 00:12:45.644 --rc geninfo_all_blocks=1 00:12:45.644 --rc geninfo_unexecuted_blocks=1 00:12:45.644 00:12:45.644 ' 00:12:45.644 11:56:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.644 --rc genhtml_branch_coverage=1 00:12:45.644 --rc genhtml_function_coverage=1 00:12:45.644 --rc genhtml_legend=1 00:12:45.644 --rc geninfo_all_blocks=1 00:12:45.644 --rc geninfo_unexecuted_blocks=1 00:12:45.644 00:12:45.644 ' 00:12:45.644 11:56:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.644 --rc genhtml_branch_coverage=1 00:12:45.644 --rc genhtml_function_coverage=1 00:12:45.644 --rc genhtml_legend=1 00:12:45.644 --rc geninfo_all_blocks=1 00:12:45.644 --rc geninfo_unexecuted_blocks=1 00:12:45.644 00:12:45.644 ' 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:12:45.644 11:56:50 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:12:45.644 11:56:50 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:45.644 11:56:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:45.644 11:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=116376 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@27 -- # waitforlisten 116376 00:12:45.644 11:56:50 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:45.644 11:56:50 -- common/autotest_common.sh@829 -- # '[' -z 116376 ']' 00:12:45.644 11:56:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.644 11:56:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.644 11:56:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.644 11:56:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.644 11:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.644 [2024-11-29 11:56:50.981427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:45.644 [2024-11-29 11:56:50.981682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116376 ] 00:12:45.644 [2024-11-29 11:56:51.136652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.903 [2024-11-29 11:56:51.234291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:45.903 [2024-11-29 11:56:51.234760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.903 [2024-11-29 11:56:51.234774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.520 11:56:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.520 11:56:51 -- common/autotest_common.sh@862 -- # return 0 00:12:46.520 11:56:51 -- spdkcli/tcp.sh@31 -- # socat_pid=116400 00:12:46.520 11:56:51 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:46.520 11:56:51 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:46.778 [ 00:12:46.778 "spdk_get_version", 00:12:46.778 "rpc_get_methods", 00:12:46.778 "trace_get_info", 00:12:46.778 "trace_get_tpoint_group_mask", 00:12:46.778 "trace_disable_tpoint_group", 00:12:46.778 "trace_enable_tpoint_group", 00:12:46.778 "trace_clear_tpoint_mask", 00:12:46.778 "trace_set_tpoint_mask", 00:12:46.778 "framework_get_pci_devices", 00:12:46.778 "framework_get_config", 00:12:46.778 "framework_get_subsystems", 00:12:46.778 "iobuf_get_stats", 00:12:46.778 "iobuf_set_options", 00:12:46.778 "sock_set_default_impl", 00:12:46.778 "sock_impl_set_options", 00:12:46.778 "sock_impl_get_options", 00:12:46.778 "vmd_rescan", 00:12:46.778 "vmd_remove_device", 00:12:46.778 "vmd_enable", 00:12:46.778 "accel_get_stats", 00:12:46.778 "accel_set_options", 00:12:46.778 "accel_set_driver", 00:12:46.778 "accel_crypto_key_destroy", 00:12:46.778 "accel_crypto_keys_get", 00:12:46.778 "accel_crypto_key_create", 00:12:46.778 "accel_assign_opc", 00:12:46.778 "accel_get_module_info", 00:12:46.778 "accel_get_opc_assignments", 00:12:46.778 "notify_get_notifications", 00:12:46.778 "notify_get_types", 00:12:46.778 "bdev_get_histogram", 00:12:46.778 "bdev_enable_histogram", 00:12:46.778 "bdev_set_qos_limit", 00:12:46.778 "bdev_set_qd_sampling_period", 00:12:46.778 "bdev_get_bdevs", 00:12:46.778 "bdev_reset_iostat", 00:12:46.778 "bdev_get_iostat", 00:12:46.778 "bdev_examine", 00:12:46.778 "bdev_wait_for_examine", 00:12:46.778 "bdev_set_options", 00:12:46.778 "scsi_get_devices", 00:12:46.778 "thread_set_cpumask", 
00:12:46.778 "framework_get_scheduler", 00:12:46.778 "framework_set_scheduler", 00:12:46.778 "framework_get_reactors", 00:12:46.778 "thread_get_io_channels", 00:12:46.778 "thread_get_pollers", 00:12:46.778 "thread_get_stats", 00:12:46.778 "framework_monitor_context_switch", 00:12:46.778 "spdk_kill_instance", 00:12:46.778 "log_enable_timestamps", 00:12:46.778 "log_get_flags", 00:12:46.778 "log_clear_flag", 00:12:46.778 "log_set_flag", 00:12:46.778 "log_get_level", 00:12:46.778 "log_set_level", 00:12:46.778 "log_get_print_level", 00:12:46.778 "log_set_print_level", 00:12:46.778 "framework_enable_cpumask_locks", 00:12:46.778 "framework_disable_cpumask_locks", 00:12:46.778 "framework_wait_init", 00:12:46.778 "framework_start_init", 00:12:46.778 "virtio_blk_create_transport", 00:12:46.778 "virtio_blk_get_transports", 00:12:46.778 "vhost_controller_set_coalescing", 00:12:46.778 "vhost_get_controllers", 00:12:46.778 "vhost_delete_controller", 00:12:46.778 "vhost_create_blk_controller", 00:12:46.778 "vhost_scsi_controller_remove_target", 00:12:46.778 "vhost_scsi_controller_add_target", 00:12:46.778 "vhost_start_scsi_controller", 00:12:46.778 "vhost_create_scsi_controller", 00:12:46.778 "nbd_get_disks", 00:12:46.778 "nbd_stop_disk", 00:12:46.778 "nbd_start_disk", 00:12:46.778 "env_dpdk_get_mem_stats", 00:12:46.778 "nvmf_subsystem_get_listeners", 00:12:46.778 "nvmf_subsystem_get_qpairs", 00:12:46.778 "nvmf_subsystem_get_controllers", 00:12:46.778 "nvmf_get_stats", 00:12:46.778 "nvmf_get_transports", 00:12:46.778 "nvmf_create_transport", 00:12:46.778 "nvmf_get_targets", 00:12:46.778 "nvmf_delete_target", 00:12:46.778 "nvmf_create_target", 00:12:46.778 "nvmf_subsystem_allow_any_host", 00:12:46.778 "nvmf_subsystem_remove_host", 00:12:46.778 "nvmf_subsystem_add_host", 00:12:46.778 "nvmf_subsystem_remove_ns", 00:12:46.778 "nvmf_subsystem_add_ns", 00:12:46.778 "nvmf_subsystem_listener_set_ana_state", 00:12:46.778 "nvmf_discovery_get_referrals", 00:12:46.778 "nvmf_discovery_remove_referral", 00:12:46.778 "nvmf_discovery_add_referral", 00:12:46.778 "nvmf_subsystem_remove_listener", 00:12:46.778 "nvmf_subsystem_add_listener", 00:12:46.778 "nvmf_delete_subsystem", 00:12:46.778 "nvmf_create_subsystem", 00:12:46.778 "nvmf_get_subsystems", 00:12:46.778 "nvmf_set_crdt", 00:12:46.778 "nvmf_set_config", 00:12:46.778 "nvmf_set_max_subsystems", 00:12:46.778 "iscsi_set_options", 00:12:46.778 "iscsi_get_auth_groups", 00:12:46.778 "iscsi_auth_group_remove_secret", 00:12:46.778 "iscsi_auth_group_add_secret", 00:12:46.778 "iscsi_delete_auth_group", 00:12:46.778 "iscsi_create_auth_group", 00:12:46.778 "iscsi_set_discovery_auth", 00:12:46.778 "iscsi_get_options", 00:12:46.778 "iscsi_target_node_request_logout", 00:12:46.778 "iscsi_target_node_set_redirect", 00:12:46.778 "iscsi_target_node_set_auth", 00:12:46.778 "iscsi_target_node_add_lun", 00:12:46.778 "iscsi_get_connections", 00:12:46.778 "iscsi_portal_group_set_auth", 00:12:46.778 "iscsi_start_portal_group", 00:12:46.778 "iscsi_delete_portal_group", 00:12:46.778 "iscsi_create_portal_group", 00:12:46.778 "iscsi_get_portal_groups", 00:12:46.778 "iscsi_delete_target_node", 00:12:46.778 "iscsi_target_node_remove_pg_ig_maps", 00:12:46.778 "iscsi_target_node_add_pg_ig_maps", 00:12:46.778 "iscsi_create_target_node", 00:12:46.778 "iscsi_get_target_nodes", 00:12:46.778 "iscsi_delete_initiator_group", 00:12:46.778 "iscsi_initiator_group_remove_initiators", 00:12:46.778 "iscsi_initiator_group_add_initiators", 00:12:46.778 "iscsi_create_initiator_group", 00:12:46.778 
"iscsi_get_initiator_groups", 00:12:46.778 "iaa_scan_accel_module", 00:12:46.778 "dsa_scan_accel_module", 00:12:46.778 "ioat_scan_accel_module", 00:12:46.778 "accel_error_inject_error", 00:12:46.778 "bdev_iscsi_delete", 00:12:46.778 "bdev_iscsi_create", 00:12:46.778 "bdev_iscsi_set_options", 00:12:46.778 "bdev_virtio_attach_controller", 00:12:46.778 "bdev_virtio_scsi_get_devices", 00:12:46.778 "bdev_virtio_detach_controller", 00:12:46.778 "bdev_virtio_blk_set_hotplug", 00:12:46.778 "bdev_ftl_set_property", 00:12:46.778 "bdev_ftl_get_properties", 00:12:46.779 "bdev_ftl_get_stats", 00:12:46.779 "bdev_ftl_unmap", 00:12:46.779 "bdev_ftl_unload", 00:12:46.779 "bdev_ftl_delete", 00:12:46.779 "bdev_ftl_load", 00:12:46.779 "bdev_ftl_create", 00:12:46.779 "bdev_aio_delete", 00:12:46.779 "bdev_aio_rescan", 00:12:46.779 "bdev_aio_create", 00:12:46.779 "blobfs_create", 00:12:46.779 "blobfs_detect", 00:12:46.779 "blobfs_set_cache_size", 00:12:46.779 "bdev_zone_block_delete", 00:12:46.779 "bdev_zone_block_create", 00:12:46.779 "bdev_delay_delete", 00:12:46.779 "bdev_delay_create", 00:12:46.779 "bdev_delay_update_latency", 00:12:46.779 "bdev_split_delete", 00:12:46.779 "bdev_split_create", 00:12:46.779 "bdev_error_inject_error", 00:12:46.779 "bdev_error_delete", 00:12:46.779 "bdev_error_create", 00:12:46.779 "bdev_raid_set_options", 00:12:46.779 "bdev_raid_remove_base_bdev", 00:12:46.779 "bdev_raid_add_base_bdev", 00:12:46.779 "bdev_raid_delete", 00:12:46.779 "bdev_raid_create", 00:12:46.779 "bdev_raid_get_bdevs", 00:12:46.779 "bdev_lvol_grow_lvstore", 00:12:46.779 "bdev_lvol_get_lvols", 00:12:46.779 "bdev_lvol_get_lvstores", 00:12:46.779 "bdev_lvol_delete", 00:12:46.779 "bdev_lvol_set_read_only", 00:12:46.779 "bdev_lvol_resize", 00:12:46.779 "bdev_lvol_decouple_parent", 00:12:46.779 "bdev_lvol_inflate", 00:12:46.779 "bdev_lvol_rename", 00:12:46.779 "bdev_lvol_clone_bdev", 00:12:46.779 "bdev_lvol_clone", 00:12:46.779 "bdev_lvol_snapshot", 00:12:46.779 "bdev_lvol_create", 00:12:46.779 "bdev_lvol_delete_lvstore", 00:12:46.779 "bdev_lvol_rename_lvstore", 00:12:46.779 "bdev_lvol_create_lvstore", 00:12:46.779 "bdev_passthru_delete", 00:12:46.779 "bdev_passthru_create", 00:12:46.779 "bdev_nvme_cuse_unregister", 00:12:46.779 "bdev_nvme_cuse_register", 00:12:46.779 "bdev_opal_new_user", 00:12:46.779 "bdev_opal_set_lock_state", 00:12:46.779 "bdev_opal_delete", 00:12:46.779 "bdev_opal_get_info", 00:12:46.779 "bdev_opal_create", 00:12:46.779 "bdev_nvme_opal_revert", 00:12:46.779 "bdev_nvme_opal_init", 00:12:46.779 "bdev_nvme_send_cmd", 00:12:46.779 "bdev_nvme_get_path_iostat", 00:12:46.779 "bdev_nvme_get_mdns_discovery_info", 00:12:46.779 "bdev_nvme_stop_mdns_discovery", 00:12:46.779 "bdev_nvme_start_mdns_discovery", 00:12:46.779 "bdev_nvme_set_multipath_policy", 00:12:46.779 "bdev_nvme_set_preferred_path", 00:12:46.779 "bdev_nvme_get_io_paths", 00:12:46.779 "bdev_nvme_remove_error_injection", 00:12:46.779 "bdev_nvme_add_error_injection", 00:12:46.779 "bdev_nvme_get_discovery_info", 00:12:46.779 "bdev_nvme_stop_discovery", 00:12:46.779 "bdev_nvme_start_discovery", 00:12:46.779 "bdev_nvme_get_controller_health_info", 00:12:46.779 "bdev_nvme_disable_controller", 00:12:46.779 "bdev_nvme_enable_controller", 00:12:46.779 "bdev_nvme_reset_controller", 00:12:46.779 "bdev_nvme_get_transport_statistics", 00:12:46.779 "bdev_nvme_apply_firmware", 00:12:46.779 "bdev_nvme_detach_controller", 00:12:46.779 "bdev_nvme_get_controllers", 00:12:46.779 "bdev_nvme_attach_controller", 00:12:46.779 "bdev_nvme_set_hotplug", 00:12:46.779 
"bdev_nvme_set_options", 00:12:46.779 "bdev_null_resize", 00:12:46.779 "bdev_null_delete", 00:12:46.779 "bdev_null_create", 00:12:46.779 "bdev_malloc_delete", 00:12:46.779 "bdev_malloc_create" 00:12:46.779 ] 00:12:46.779 11:56:52 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:46.779 11:56:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.779 11:56:52 -- common/autotest_common.sh@10 -- # set +x 00:12:46.779 11:56:52 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:46.779 11:56:52 -- spdkcli/tcp.sh@38 -- # killprocess 116376 00:12:46.779 11:56:52 -- common/autotest_common.sh@936 -- # '[' -z 116376 ']' 00:12:46.779 11:56:52 -- common/autotest_common.sh@940 -- # kill -0 116376 00:12:46.779 11:56:52 -- common/autotest_common.sh@941 -- # uname 00:12:46.779 11:56:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:46.779 11:56:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116376 00:12:46.779 11:56:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:46.779 11:56:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:46.779 killing process with pid 116376 00:12:46.779 11:56:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116376' 00:12:46.779 11:56:52 -- common/autotest_common.sh@955 -- # kill 116376 00:12:46.779 11:56:52 -- common/autotest_common.sh@960 -- # wait 116376 00:12:47.345 00:12:47.345 real 0m1.898s 00:12:47.345 user 0m3.360s 00:12:47.345 sys 0m0.491s 00:12:47.345 11:56:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:47.345 11:56:52 -- common/autotest_common.sh@10 -- # set +x 00:12:47.345 ************************************ 00:12:47.345 END TEST spdkcli_tcp 00:12:47.345 ************************************ 00:12:47.345 11:56:52 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:47.345 11:56:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:47.345 11:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:47.345 11:56:52 -- common/autotest_common.sh@10 -- # set +x 00:12:47.345 ************************************ 00:12:47.345 START TEST dpdk_mem_utility 00:12:47.345 ************************************ 00:12:47.345 11:56:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:47.345 * Looking for test storage... 
00:12:47.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:47.345 11:56:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:47.345 11:56:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:47.345 11:56:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:47.345 11:56:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:47.345 11:56:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:47.345 11:56:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:47.345 11:56:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:47.345 11:56:52 -- scripts/common.sh@335 -- # IFS=.-: 00:12:47.345 11:56:52 -- scripts/common.sh@335 -- # read -ra ver1 00:12:47.345 11:56:52 -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.345 11:56:52 -- scripts/common.sh@336 -- # read -ra ver2 00:12:47.345 11:56:52 -- scripts/common.sh@337 -- # local 'op=<' 00:12:47.345 11:56:52 -- scripts/common.sh@339 -- # ver1_l=2 00:12:47.345 11:56:52 -- scripts/common.sh@340 -- # ver2_l=1 00:12:47.345 11:56:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:47.345 11:56:52 -- scripts/common.sh@343 -- # case "$op" in 00:12:47.345 11:56:52 -- scripts/common.sh@344 -- # : 1 00:12:47.345 11:56:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:47.345 11:56:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.345 11:56:52 -- scripts/common.sh@364 -- # decimal 1 00:12:47.345 11:56:52 -- scripts/common.sh@352 -- # local d=1 00:12:47.345 11:56:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.345 11:56:52 -- scripts/common.sh@354 -- # echo 1 00:12:47.345 11:56:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:47.603 11:56:52 -- scripts/common.sh@365 -- # decimal 2 00:12:47.603 11:56:52 -- scripts/common.sh@352 -- # local d=2 00:12:47.603 11:56:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.603 11:56:52 -- scripts/common.sh@354 -- # echo 2 00:12:47.603 11:56:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:47.603 11:56:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:47.603 11:56:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:47.603 11:56:52 -- scripts/common.sh@367 -- # return 0 00:12:47.603 11:56:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.603 11:56:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:47.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.603 --rc genhtml_branch_coverage=1 00:12:47.603 --rc genhtml_function_coverage=1 00:12:47.603 --rc genhtml_legend=1 00:12:47.603 --rc geninfo_all_blocks=1 00:12:47.603 --rc geninfo_unexecuted_blocks=1 00:12:47.603 00:12:47.603 ' 00:12:47.603 11:56:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:47.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.603 --rc genhtml_branch_coverage=1 00:12:47.603 --rc genhtml_function_coverage=1 00:12:47.603 --rc genhtml_legend=1 00:12:47.603 --rc geninfo_all_blocks=1 00:12:47.603 --rc geninfo_unexecuted_blocks=1 00:12:47.603 00:12:47.603 ' 00:12:47.603 11:56:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:47.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.603 --rc genhtml_branch_coverage=1 00:12:47.603 --rc genhtml_function_coverage=1 00:12:47.603 --rc genhtml_legend=1 00:12:47.603 --rc geninfo_all_blocks=1 00:12:47.603 --rc geninfo_unexecuted_blocks=1 00:12:47.603 00:12:47.603 ' 
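The dpdk_mem_utility test below asks the running target to dump its DPDK memory layout (env_dpdk_get_mem_stats reports the dump file /tmp/spdk_mem_dump.txt) and then summarizes it with scripts/dpdk_mem_info.py; the -m 0 form prints the per-element detail for heap 0. A rough sketch of the same steps, assuming a target is already listening on /var/tmp/spdk.sock:

    spdk_repo=/home/vagrant/spdk_repo/spdk
    # Ask the target to write its current memory map to /tmp/spdk_mem_dump.txt.
    "$spdk_repo/scripts/rpc.py" env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from that dump.
    "$spdk_repo/scripts/dpdk_mem_info.py"
    # Print the detailed free/malloc element list for heap 0.
    "$spdk_repo/scripts/dpdk_mem_info.py" -m 0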
00:12:47.603 11:56:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:47.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.603 --rc genhtml_branch_coverage=1 00:12:47.603 --rc genhtml_function_coverage=1 00:12:47.603 --rc genhtml_legend=1 00:12:47.603 --rc geninfo_all_blocks=1 00:12:47.603 --rc geninfo_unexecuted_blocks=1 00:12:47.603 00:12:47.603 ' 00:12:47.603 11:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:47.603 11:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=116488 00:12:47.603 11:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.603 11:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 116488 00:12:47.603 11:56:52 -- common/autotest_common.sh@829 -- # '[' -z 116488 ']' 00:12:47.603 11:56:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.603 11:56:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.603 11:56:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.603 11:56:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.603 11:56:52 -- common/autotest_common.sh@10 -- # set +x 00:12:47.603 [2024-11-29 11:56:52.912466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:47.603 [2024-11-29 11:56:52.912664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116488 ] 00:12:47.603 [2024-11-29 11:56:53.057667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.862 [2024-11-29 11:56:53.157084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:47.862 [2024-11-29 11:56:53.157389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.428 11:56:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.428 11:56:53 -- common/autotest_common.sh@862 -- # return 0 00:12:48.428 11:56:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:48.428 11:56:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:48.428 11:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 11:56:53 -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 { 00:12:48.428 "filename": "/tmp/spdk_mem_dump.txt" 00:12:48.428 } 00:12:48.428 11:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 11:56:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:48.687 DPDK memory size 814.000000 MiB in 1 heap(s) 00:12:48.687 1 heaps totaling size 814.000000 MiB 00:12:48.687 size: 814.000000 MiB heap id: 0 00:12:48.687 end heaps---------- 00:12:48.687 8 mempools totaling size 598.116089 MiB 00:12:48.687 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:48.687 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:48.687 size: 84.521057 MiB name: bdev_io_116488 00:12:48.687 size: 51.011292 MiB name: evtpool_116488 00:12:48.687 size: 50.003479 MiB name: 
msgpool_116488 00:12:48.687 size: 21.763794 MiB name: PDU_Pool 00:12:48.687 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:48.687 size: 0.026123 MiB name: Session_Pool 00:12:48.687 end mempools------- 00:12:48.687 6 memzones totaling size 4.142822 MiB 00:12:48.687 size: 1.000366 MiB name: RG_ring_0_116488 00:12:48.687 size: 1.000366 MiB name: RG_ring_1_116488 00:12:48.687 size: 1.000366 MiB name: RG_ring_4_116488 00:12:48.687 size: 1.000366 MiB name: RG_ring_5_116488 00:12:48.687 size: 0.125366 MiB name: RG_ring_2_116488 00:12:48.687 size: 0.015991 MiB name: RG_ring_3_116488 00:12:48.687 end memzones------- 00:12:48.687 11:56:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:48.687 heap id: 0 total size: 814.000000 MiB number of busy elements: 216 number of free elements: 15 00:12:48.687 list of free elements. size: 12.487305 MiB 00:12:48.687 element at address: 0x200000400000 with size: 1.999512 MiB 00:12:48.687 element at address: 0x200018e00000 with size: 0.999878 MiB 00:12:48.687 element at address: 0x200019000000 with size: 0.999878 MiB 00:12:48.687 element at address: 0x200003e00000 with size: 0.996277 MiB 00:12:48.687 element at address: 0x200031c00000 with size: 0.994446 MiB 00:12:48.687 element at address: 0x200013800000 with size: 0.978699 MiB 00:12:48.687 element at address: 0x200007000000 with size: 0.959839 MiB 00:12:48.687 element at address: 0x200019200000 with size: 0.936584 MiB 00:12:48.687 element at address: 0x200000200000 with size: 0.837219 MiB 00:12:48.687 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:12:48.687 element at address: 0x20000b200000 with size: 0.489807 MiB 00:12:48.687 element at address: 0x200000800000 with size: 0.487061 MiB 00:12:48.687 element at address: 0x200019400000 with size: 0.485657 MiB 00:12:48.687 element at address: 0x200027e00000 with size: 0.401978 MiB 00:12:48.687 element at address: 0x200003a00000 with size: 0.351685 MiB 00:12:48.687 list of standard malloc elements. 
size: 199.250122 MiB 00:12:48.687 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:12:48.687 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:12:48.687 element at address: 0x200018efff80 with size: 1.000122 MiB 00:12:48.687 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:12:48.687 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:12:48.687 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:12:48.687 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:12:48.687 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:12:48.688 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:12:48.688 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a140 with size: 0.000183 MiB 
00:12:48.688 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003adb300 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003adb500 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003affa80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003affb40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:12:48.688 element at 
address: 0x20001aa92140 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94600 
with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:12:48.688 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:12:48.689 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:12:48.689 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:12:48.689 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:12:48.689 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e66e80 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e66f40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6db40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 
00:12:48.689 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:12:48.689 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:12:48.689 list of memzone associated elements. size: 602.262573 MiB 00:12:48.689 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:12:48.689 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:48.689 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:12:48.689 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:48.689 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:12:48.689 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_116488_0 00:12:48.689 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:12:48.689 associated memzone info: size: 48.002930 MiB name: MP_evtpool_116488_0 00:12:48.689 element at address: 0x200003fff380 with size: 48.003052 MiB 00:12:48.689 associated memzone info: size: 48.002930 MiB name: MP_msgpool_116488_0 00:12:48.689 element at address: 0x2000195be940 with size: 20.255554 MiB 00:12:48.689 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:48.689 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:12:48.689 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:48.689 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:12:48.689 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_116488 00:12:48.689 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:12:48.689 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_116488 00:12:48.689 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:12:48.689 associated memzone info: size: 1.007996 MiB name: MP_evtpool_116488 00:12:48.689 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:12:48.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:48.689 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:12:48.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:48.689 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:12:48.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:48.689 element at address: 0x2000008fd240 with size: 1.008118 MiB 
00:12:48.689 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:48.689 element at address: 0x200003eff180 with size: 1.000488 MiB 00:12:48.689 associated memzone info: size: 1.000366 MiB name: RG_ring_0_116488 00:12:48.689 element at address: 0x200003affc00 with size: 1.000488 MiB 00:12:48.689 associated memzone info: size: 1.000366 MiB name: RG_ring_1_116488 00:12:48.689 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:12:48.689 associated memzone info: size: 1.000366 MiB name: RG_ring_4_116488 00:12:48.689 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:12:48.689 associated memzone info: size: 1.000366 MiB name: RG_ring_5_116488 00:12:48.689 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:12:48.689 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_116488 00:12:48.689 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:12:48.689 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:48.689 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:12:48.689 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:48.689 element at address: 0x20001947c540 with size: 0.250488 MiB 00:12:48.689 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:48.689 element at address: 0x200003adf880 with size: 0.125488 MiB 00:12:48.689 associated memzone info: size: 0.125366 MiB name: RG_ring_2_116488 00:12:48.689 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:12:48.689 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:48.689 element at address: 0x200027e67000 with size: 0.023743 MiB 00:12:48.689 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:48.689 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:12:48.689 associated memzone info: size: 0.015991 MiB name: RG_ring_3_116488 00:12:48.689 element at address: 0x200027e6d140 with size: 0.002441 MiB 00:12:48.689 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:48.689 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:12:48.689 associated memzone info: size: 0.000183 MiB name: MP_msgpool_116488 00:12:48.689 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:12:48.689 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_116488 00:12:48.689 element at address: 0x200027e6dc00 with size: 0.000305 MiB 00:12:48.689 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:48.689 11:56:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:48.689 11:56:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 116488 00:12:48.689 11:56:54 -- common/autotest_common.sh@936 -- # '[' -z 116488 ']' 00:12:48.689 11:56:54 -- common/autotest_common.sh@940 -- # kill -0 116488 00:12:48.689 11:56:54 -- common/autotest_common.sh@941 -- # uname 00:12:48.689 11:56:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.689 11:56:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116488 00:12:48.689 11:56:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:48.689 11:56:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:48.689 killing process with pid 116488 00:12:48.689 11:56:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116488' 00:12:48.689 11:56:54 -- common/autotest_common.sh@955 -- # kill 116488 00:12:48.689 11:56:54 -- 
common/autotest_common.sh@960 -- # wait 116488 00:12:49.255 00:12:49.255 real 0m1.830s 00:12:49.255 user 0m1.931s 00:12:49.255 sys 0m0.484s 00:12:49.255 11:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.255 11:56:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.255 ************************************ 00:12:49.255 END TEST dpdk_mem_utility 00:12:49.255 ************************************ 00:12:49.255 11:56:54 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:49.255 11:56:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:49.255 11:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.255 11:56:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.255 ************************************ 00:12:49.255 START TEST event 00:12:49.255 ************************************ 00:12:49.255 11:56:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:49.255 * Looking for test storage... 00:12:49.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:49.255 11:56:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:49.255 11:56:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:49.255 11:56:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:49.255 11:56:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:49.255 11:56:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:49.255 11:56:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:49.255 11:56:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:49.255 11:56:54 -- scripts/common.sh@335 -- # IFS=.-: 00:12:49.255 11:56:54 -- scripts/common.sh@335 -- # read -ra ver1 00:12:49.255 11:56:54 -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.255 11:56:54 -- scripts/common.sh@336 -- # read -ra ver2 00:12:49.255 11:56:54 -- scripts/common.sh@337 -- # local 'op=<' 00:12:49.255 11:56:54 -- scripts/common.sh@339 -- # ver1_l=2 00:12:49.255 11:56:54 -- scripts/common.sh@340 -- # ver2_l=1 00:12:49.255 11:56:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:49.256 11:56:54 -- scripts/common.sh@343 -- # case "$op" in 00:12:49.256 11:56:54 -- scripts/common.sh@344 -- # : 1 00:12:49.256 11:56:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:49.256 11:56:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.256 11:56:54 -- scripts/common.sh@364 -- # decimal 1 00:12:49.256 11:56:54 -- scripts/common.sh@352 -- # local d=1 00:12:49.256 11:56:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.256 11:56:54 -- scripts/common.sh@354 -- # echo 1 00:12:49.256 11:56:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:49.256 11:56:54 -- scripts/common.sh@365 -- # decimal 2 00:12:49.256 11:56:54 -- scripts/common.sh@352 -- # local d=2 00:12:49.256 11:56:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.256 11:56:54 -- scripts/common.sh@354 -- # echo 2 00:12:49.256 11:56:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:49.256 11:56:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:49.256 11:56:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:49.256 11:56:54 -- scripts/common.sh@367 -- # return 0 00:12:49.256 11:56:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.256 11:56:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:49.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.256 --rc genhtml_branch_coverage=1 00:12:49.256 --rc genhtml_function_coverage=1 00:12:49.256 --rc genhtml_legend=1 00:12:49.256 --rc geninfo_all_blocks=1 00:12:49.256 --rc geninfo_unexecuted_blocks=1 00:12:49.256 00:12:49.256 ' 00:12:49.256 11:56:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:49.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.256 --rc genhtml_branch_coverage=1 00:12:49.256 --rc genhtml_function_coverage=1 00:12:49.256 --rc genhtml_legend=1 00:12:49.256 --rc geninfo_all_blocks=1 00:12:49.256 --rc geninfo_unexecuted_blocks=1 00:12:49.256 00:12:49.256 ' 00:12:49.256 11:56:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:49.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.256 --rc genhtml_branch_coverage=1 00:12:49.256 --rc genhtml_function_coverage=1 00:12:49.256 --rc genhtml_legend=1 00:12:49.256 --rc geninfo_all_blocks=1 00:12:49.256 --rc geninfo_unexecuted_blocks=1 00:12:49.256 00:12:49.256 ' 00:12:49.256 11:56:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:49.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.256 --rc genhtml_branch_coverage=1 00:12:49.256 --rc genhtml_function_coverage=1 00:12:49.256 --rc genhtml_legend=1 00:12:49.256 --rc geninfo_all_blocks=1 00:12:49.256 --rc geninfo_unexecuted_blocks=1 00:12:49.256 00:12:49.256 ' 00:12:49.256 11:56:54 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:49.256 11:56:54 -- bdev/nbd_common.sh@6 -- # set -e 00:12:49.256 11:56:54 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:49.256 11:56:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:49.256 11:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.256 11:56:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.256 ************************************ 00:12:49.256 START TEST event_perf 00:12:49.256 ************************************ 00:12:49.256 11:56:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:49.535 Running I/O for 1 seconds...[2024-11-29 11:56:54.775751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
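The event_perf binary run here is a small event-framework microbenchmark: with -m 0xF it starts one reactor per core in the mask and, after the -t 1 second run, prints the number of events each lcore processed (the "lcore N: ..." lines below). A standalone invocation, sketched with the in-tree build path used by the log:

    spdk_repo=/home/vagrant/spdk_repo/spdk
    # Run the event-framework benchmark on cores 0-3 for one second;
    # it reports per-lcore event counts when the timer expires.
    "$spdk_repo/test/event/event_perf/event_perf" -m 0xF -t 1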
00:12:49.535 [2024-11-29 11:56:54.775993] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116577 ] 00:12:49.535 [2024-11-29 11:56:54.940966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.535 [2024-11-29 11:56:55.039071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.535 [2024-11-29 11:56:55.039158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.792 Running I/O for 1 seconds...[2024-11-29 11:56:55.039249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.792 [2024-11-29 11:56:55.039250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.745 00:12:50.745 lcore 0: 198919 00:12:50.745 lcore 1: 198917 00:12:50.745 lcore 2: 198917 00:12:50.745 lcore 3: 198918 00:12:50.745 done. 00:12:50.745 00:12:50.745 real 0m1.390s 00:12:50.745 user 0m4.196s 00:12:50.745 sys 0m0.097s 00:12:50.745 11:56:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:50.745 ************************************ 00:12:50.745 END TEST event_perf 00:12:50.745 11:56:56 -- common/autotest_common.sh@10 -- # set +x 00:12:50.745 ************************************ 00:12:50.745 11:56:56 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:50.745 11:56:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:50.745 11:56:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.745 11:56:56 -- common/autotest_common.sh@10 -- # set +x 00:12:50.745 ************************************ 00:12:50.745 START TEST event_reactor 00:12:50.745 ************************************ 00:12:50.745 11:56:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:50.745 [2024-11-29 11:56:56.209251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:50.745 [2024-11-29 11:56:56.209468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116623 ] 00:12:51.003 [2024-11-29 11:56:56.348578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.003 [2024-11-29 11:56:56.437289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.377 test_start 00:12:52.377 oneshot 00:12:52.377 tick 100 00:12:52.377 tick 100 00:12:52.377 tick 250 00:12:52.377 tick 100 00:12:52.377 tick 100 00:12:52.377 tick 100 00:12:52.377 tick 250 00:12:52.377 tick 500 00:12:52.377 tick 100 00:12:52.377 tick 100 00:12:52.377 tick 250 00:12:52.377 tick 100 00:12:52.377 tick 100 00:12:52.377 test_end 00:12:52.377 00:12:52.377 real 0m1.354s 00:12:52.377 user 0m1.153s 00:12:52.377 sys 0m0.100s 00:12:52.377 11:56:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.377 ************************************ 00:12:52.377 END TEST event_reactor 00:12:52.377 ************************************ 00:12:52.377 11:56:57 -- common/autotest_common.sh@10 -- # set +x 00:12:52.377 11:56:57 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:52.377 11:56:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:52.377 11:56:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.377 11:56:57 -- common/autotest_common.sh@10 -- # set +x 00:12:52.377 ************************************ 00:12:52.377 START TEST event_reactor_perf 00:12:52.377 ************************************ 00:12:52.377 11:56:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:52.377 [2024-11-29 11:56:57.620677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:52.377 [2024-11-29 11:56:57.620947] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116668 ] 00:12:52.377 [2024-11-29 11:56:57.769575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.377 [2024-11-29 11:56:57.852848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.754 test_start 00:12:53.754 test_end 00:12:53.754 Performance: 321603 events per second 00:12:53.754 00:12:53.754 real 0m1.368s 00:12:53.754 user 0m1.187s 00:12:53.754 sys 0m0.081s 00:12:53.754 11:56:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:53.754 ************************************ 00:12:53.754 END TEST event_reactor_perf 00:12:53.754 ************************************ 00:12:53.754 11:56:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.754 11:56:59 -- event/event.sh@49 -- # uname -s 00:12:53.754 11:56:59 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:53.754 11:56:59 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:53.754 11:56:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:53.754 11:56:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.754 11:56:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.754 ************************************ 00:12:53.754 START TEST event_scheduler 00:12:53.754 ************************************ 00:12:53.754 11:56:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:53.754 * Looking for test storage... 00:12:53.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:53.754 11:56:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:53.754 11:56:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:53.754 11:56:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:53.754 11:56:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:53.754 11:56:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:53.754 11:56:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:53.754 11:56:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:53.754 11:56:59 -- scripts/common.sh@335 -- # IFS=.-: 00:12:53.754 11:56:59 -- scripts/common.sh@335 -- # read -ra ver1 00:12:53.754 11:56:59 -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.754 11:56:59 -- scripts/common.sh@336 -- # read -ra ver2 00:12:53.754 11:56:59 -- scripts/common.sh@337 -- # local 'op=<' 00:12:53.754 11:56:59 -- scripts/common.sh@339 -- # ver1_l=2 00:12:53.754 11:56:59 -- scripts/common.sh@340 -- # ver2_l=1 00:12:53.754 11:56:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:53.754 11:56:59 -- scripts/common.sh@343 -- # case "$op" in 00:12:53.754 11:56:59 -- scripts/common.sh@344 -- # : 1 00:12:53.754 11:56:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:53.754 11:56:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.754 11:56:59 -- scripts/common.sh@364 -- # decimal 1 00:12:53.754 11:56:59 -- scripts/common.sh@352 -- # local d=1 00:12:53.754 11:56:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.754 11:56:59 -- scripts/common.sh@354 -- # echo 1 00:12:53.754 11:56:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:53.754 11:56:59 -- scripts/common.sh@365 -- # decimal 2 00:12:53.754 11:56:59 -- scripts/common.sh@352 -- # local d=2 00:12:53.754 11:56:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.754 11:56:59 -- scripts/common.sh@354 -- # echo 2 00:12:53.754 11:56:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:53.754 11:56:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:53.754 11:56:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:53.754 11:56:59 -- scripts/common.sh@367 -- # return 0 00:12:53.754 11:56:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.754 11:56:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:53.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.754 --rc genhtml_branch_coverage=1 00:12:53.754 --rc genhtml_function_coverage=1 00:12:53.754 --rc genhtml_legend=1 00:12:53.754 --rc geninfo_all_blocks=1 00:12:53.754 --rc geninfo_unexecuted_blocks=1 00:12:53.754 00:12:53.754 ' 00:12:53.754 11:56:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.755 --rc genhtml_branch_coverage=1 00:12:53.755 --rc genhtml_function_coverage=1 00:12:53.755 --rc genhtml_legend=1 00:12:53.755 --rc geninfo_all_blocks=1 00:12:53.755 --rc geninfo_unexecuted_blocks=1 00:12:53.755 00:12:53.755 ' 00:12:53.755 11:56:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.755 --rc genhtml_branch_coverage=1 00:12:53.755 --rc genhtml_function_coverage=1 00:12:53.755 --rc genhtml_legend=1 00:12:53.755 --rc geninfo_all_blocks=1 00:12:53.755 --rc geninfo_unexecuted_blocks=1 00:12:53.755 00:12:53.755 ' 00:12:53.755 11:56:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.755 --rc genhtml_branch_coverage=1 00:12:53.755 --rc genhtml_function_coverage=1 00:12:53.755 --rc genhtml_legend=1 00:12:53.755 --rc geninfo_all_blocks=1 00:12:53.755 --rc geninfo_unexecuted_blocks=1 00:12:53.755 00:12:53.755 ' 00:12:53.755 11:56:59 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:53.755 11:56:59 -- scheduler/scheduler.sh@35 -- # scheduler_pid=116740 00:12:53.755 11:56:59 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:53.755 11:56:59 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.755 11:56:59 -- scheduler/scheduler.sh@37 -- # waitforlisten 116740 00:12:53.755 11:56:59 -- common/autotest_common.sh@829 -- # '[' -z 116740 ']' 00:12:53.755 11:56:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.755 11:56:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.755 11:56:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
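This scheduler test starts test/event/scheduler/scheduler with --wait-for-rpc, so the app pauses before subsystem initialization; the test then selects the dynamic scheduler over RPC and lets initialization finish (the framework_set_scheduler and framework_start_init calls traced below). The POWER/cpufreq messages that follow typically just mean no scaling governor is exposed to the guest. A minimal sketch of the RPC sequence, with the flags taken from the log; framework_get_scheduler (listed in the rpc_get_methods output above) is added only to confirm the result:

    spdk_repo=/home/vagrant/spdk_repo/spdk
    # Start the scheduler test app paused at the RPC stage (flags as in the log).
    "$spdk_repo/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Select the dynamic scheduler, then let initialization complete.
    "$spdk_repo/scripts/rpc.py" framework_set_scheduler dynamic
    "$spdk_repo/scripts/rpc.py" framework_start_init
    # Confirm which scheduler is active.
    "$spdk_repo/scripts/rpc.py" framework_get_scheduler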
00:12:53.755 11:56:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.755 11:56:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.755 [2024-11-29 11:56:59.229279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:53.755 [2024-11-29 11:56:59.229483] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116740 ] 00:12:54.014 [2024-11-29 11:56:59.395800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.014 [2024-11-29 11:56:59.497891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.014 [2024-11-29 11:56:59.498035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.014 [2024-11-29 11:56:59.498843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.014 [2024-11-29 11:56:59.498897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.950 11:57:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.950 11:57:00 -- common/autotest_common.sh@862 -- # return 0 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 POWER: Env isn't set yet! 00:12:54.950 POWER: Attempting to initialise ACPI cpufreq power management... 00:12:54.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:54.950 POWER: Cannot set governor of lcore 0 to userspace 00:12:54.950 POWER: Attempting to initialise PSTAT power management... 00:12:54.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:54.950 POWER: Cannot set governor of lcore 0 to performance 00:12:54.950 POWER: Attempting to initialise CPPC power management... 00:12:54.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:54.950 POWER: Cannot set governor of lcore 0 to userspace 00:12:54.950 POWER: Attempting to initialise VM power management... 
00:12:54.950 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:54.950 POWER: Unable to set Power Management Environment for lcore 0 00:12:54.950 [2024-11-29 11:57:00.187290] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:12:54.950 [2024-11-29 11:57:00.187520] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:12:54.950 [2024-11-29 11:57:00.187725] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:12:54.950 [2024-11-29 11:57:00.187962] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:12:54.950 [2024-11-29 11:57:00.188157] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:12:54.950 [2024-11-29 11:57:00.188348] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:12:54.950 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 [2024-11-29 11:57:00.299372] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:12:54.950 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:54.950 11:57:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:54.950 11:57:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 ************************************ 00:12:54.950 START TEST scheduler_create_thread 00:12:54.950 ************************************ 00:12:54.950 11:57:00 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 2 00:12:54.950 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 3 00:12:54.950 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 4 00:12:54.950 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 5 00:12:54.950 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.950 11:57:00 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:54.950 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.950 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 6 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 7 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 8 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 9 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 10 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 11:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.951 11:57:00 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:54.951 11:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.951 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 11:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.887 11:57:01 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:55.887 11:57:01 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:55.887 11:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.887 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:12:57.327 11:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.327 00:12:57.327 real 0m2.132s 00:12:57.327 user 0m0.019s 00:12:57.327 sys 0m0.002s 00:12:57.327 11:57:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.327 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:12:57.327 
************************************ 00:12:57.327 END TEST scheduler_create_thread 00:12:57.327 ************************************ 00:12:57.327 11:57:02 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:57.327 11:57:02 -- scheduler/scheduler.sh@46 -- # killprocess 116740 00:12:57.327 11:57:02 -- common/autotest_common.sh@936 -- # '[' -z 116740 ']' 00:12:57.327 11:57:02 -- common/autotest_common.sh@940 -- # kill -0 116740 00:12:57.327 11:57:02 -- common/autotest_common.sh@941 -- # uname 00:12:57.327 11:57:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:57.327 11:57:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116740 00:12:57.327 11:57:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:57.327 11:57:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:57.327 11:57:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116740' 00:12:57.327 killing process with pid 116740 00:12:57.327 11:57:02 -- common/autotest_common.sh@955 -- # kill 116740 00:12:57.327 11:57:02 -- common/autotest_common.sh@960 -- # wait 116740 00:12:57.582 [2024-11-29 11:57:02.925826] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:12:57.838 00:12:57.838 real 0m4.176s 00:12:57.838 user 0m7.448s 00:12:57.838 sys 0m0.395s 00:12:57.838 ************************************ 00:12:57.838 END TEST event_scheduler 00:12:57.838 ************************************ 00:12:57.838 11:57:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.838 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 11:57:03 -- event/event.sh@51 -- # modprobe -n nbd 00:12:57.838 11:57:03 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:57.838 11:57:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:57.838 11:57:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.838 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 ************************************ 00:12:57.838 START TEST app_repeat 00:12:57.838 ************************************ 00:12:57.838 11:57:03 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:12:57.838 11:57:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.838 11:57:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.838 11:57:03 -- event/event.sh@13 -- # local nbd_list 00:12:57.838 11:57:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:57.838 11:57:03 -- event/event.sh@14 -- # local bdev_list 00:12:57.838 11:57:03 -- event/event.sh@15 -- # local repeat_times=4 00:12:57.838 11:57:03 -- event/event.sh@17 -- # modprobe nbd 00:12:57.838 11:57:03 -- event/event.sh@19 -- # repeat_pid=116853 00:12:57.838 11:57:03 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:57.838 11:57:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:57.838 Process app_repeat pid: 116853 00:12:57.838 11:57:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 116853' 00:12:57.838 11:57:03 -- event/event.sh@23 -- # for i in {0..2} 00:12:57.838 spdk_app_start Round 0 00:12:57.838 11:57:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:57.838 11:57:03 -- event/event.sh@25 -- # waitforlisten 116853 /var/tmp/spdk-nbd.sock 00:12:57.838 11:57:03 -- common/autotest_common.sh@829 -- # '[' -z 116853 ']' 00:12:57.838 11:57:03 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:57.838 11:57:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:57.838 11:57:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:57.838 11:57:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.838 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 [2024-11-29 11:57:03.280208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:57.838 [2024-11-29 11:57:03.280450] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116853 ] 00:12:58.096 [2024-11-29 11:57:03.428832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:58.096 [2024-11-29 11:57:03.515630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.096 [2024-11-29 11:57:03.515637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.041 11:57:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.041 11:57:04 -- common/autotest_common.sh@862 -- # return 0 00:12:59.041 11:57:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:59.041 Malloc0 00:12:59.299 11:57:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:59.556 Malloc1 00:12:59.556 11:57:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@12 -- # local i 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:59.556 11:57:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:59.813 /dev/nbd0 00:12:59.813 11:57:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.813 11:57:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.813 11:57:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:59.813 11:57:05 -- common/autotest_common.sh@867 -- # local i 00:12:59.813 11:57:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.813 
11:57:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.813 11:57:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:59.813 11:57:05 -- common/autotest_common.sh@871 -- # break 00:12:59.813 11:57:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.813 11:57:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.813 11:57:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:59.813 1+0 records in 00:12:59.813 1+0 records out 00:12:59.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392764 s, 10.4 MB/s 00:12:59.813 11:57:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:59.813 11:57:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:59.813 11:57:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:59.813 11:57:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.813 11:57:05 -- common/autotest_common.sh@887 -- # return 0 00:12:59.813 11:57:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.813 11:57:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:59.813 11:57:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:00.069 /dev/nbd1 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:00.069 11:57:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:00.069 11:57:05 -- common/autotest_common.sh@867 -- # local i 00:13:00.069 11:57:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:00.069 11:57:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:00.069 11:57:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:00.069 11:57:05 -- common/autotest_common.sh@871 -- # break 00:13:00.069 11:57:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:00.069 11:57:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:00.069 11:57:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:00.069 1+0 records in 00:13:00.069 1+0 records out 00:13:00.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585107 s, 7.0 MB/s 00:13:00.069 11:57:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:00.069 11:57:05 -- common/autotest_common.sh@884 -- # size=4096 00:13:00.069 11:57:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:00.069 11:57:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:00.069 11:57:05 -- common/autotest_common.sh@887 -- # return 0 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.069 11:57:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:00.326 { 00:13:00.326 "nbd_device": "/dev/nbd0", 00:13:00.326 "bdev_name": "Malloc0" 00:13:00.326 }, 00:13:00.326 { 00:13:00.326 "nbd_device": 
"/dev/nbd1", 00:13:00.326 "bdev_name": "Malloc1" 00:13:00.326 } 00:13:00.326 ]' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:00.326 { 00:13:00.326 "nbd_device": "/dev/nbd0", 00:13:00.326 "bdev_name": "Malloc0" 00:13:00.326 }, 00:13:00.326 { 00:13:00.326 "nbd_device": "/dev/nbd1", 00:13:00.326 "bdev_name": "Malloc1" 00:13:00.326 } 00:13:00.326 ]' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:00.326 /dev/nbd1' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:00.326 /dev/nbd1' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@65 -- # count=2 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@95 -- # count=2 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:00.326 256+0 records in 00:13:00.326 256+0 records out 00:13:00.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00751516 s, 140 MB/s 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:00.326 256+0 records in 00:13:00.326 256+0 records out 00:13:00.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262674 s, 39.9 MB/s 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:00.326 256+0 records in 00:13:00.326 256+0 records out 00:13:00.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258334 s, 40.6 MB/s 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.326 11:57:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@51 -- # local i 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.584 11:57:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@41 -- # break 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.584 11:57:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@41 -- # break 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.149 11:57:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@65 -- # true 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@65 -- # count=0 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@104 -- # count=0 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:01.409 11:57:06 -- bdev/nbd_common.sh@109 -- # return 0 00:13:01.409 11:57:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:01.672 11:57:06 -- event/event.sh@35 -- # sleep 3 00:13:01.672 [2024-11-29 11:57:07.151898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:01.931 [2024-11-29 11:57:07.226077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.931 
[2024-11-29 11:57:07.226087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.931 [2024-11-29 11:57:07.280124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:01.931 [2024-11-29 11:57:07.280570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:04.459 spdk_app_start Round 1 00:13:04.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:04.459 11:57:09 -- event/event.sh@23 -- # for i in {0..2} 00:13:04.459 11:57:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:13:04.459 11:57:09 -- event/event.sh@25 -- # waitforlisten 116853 /var/tmp/spdk-nbd.sock 00:13:04.459 11:57:09 -- common/autotest_common.sh@829 -- # '[' -z 116853 ']' 00:13:04.459 11:57:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:04.459 11:57:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.459 11:57:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:04.459 11:57:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.459 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.717 11:57:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.717 11:57:10 -- common/autotest_common.sh@862 -- # return 0 00:13:04.717 11:57:10 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:04.975 Malloc0 00:13:05.234 11:57:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:05.493 Malloc1 00:13:05.493 11:57:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@12 -- # local i 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.493 11:57:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:05.751 /dev/nbd0 00:13:05.751 11:57:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.751 11:57:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.751 11:57:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:05.751 11:57:11 -- common/autotest_common.sh@867 -- # local i 00:13:05.751 11:57:11 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:05.751 11:57:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:05.751 11:57:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:05.751 11:57:11 -- common/autotest_common.sh@871 -- # break 00:13:05.751 11:57:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:05.751 11:57:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:05.751 11:57:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:05.751 1+0 records in 00:13:05.751 1+0 records out 00:13:05.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627055 s, 6.5 MB/s 00:13:05.751 11:57:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:05.751 11:57:11 -- common/autotest_common.sh@884 -- # size=4096 00:13:05.751 11:57:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:05.751 11:57:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:05.751 11:57:11 -- common/autotest_common.sh@887 -- # return 0 00:13:05.751 11:57:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.751 11:57:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.751 11:57:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:06.010 /dev/nbd1 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.010 11:57:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:06.010 11:57:11 -- common/autotest_common.sh@867 -- # local i 00:13:06.010 11:57:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:06.010 11:57:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:06.010 11:57:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:06.010 11:57:11 -- common/autotest_common.sh@871 -- # break 00:13:06.010 11:57:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:06.010 11:57:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:06.010 11:57:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:06.010 1+0 records in 00:13:06.010 1+0 records out 00:13:06.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515902 s, 7.9 MB/s 00:13:06.010 11:57:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:06.010 11:57:11 -- common/autotest_common.sh@884 -- # size=4096 00:13:06.010 11:57:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:06.010 11:57:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:06.010 11:57:11 -- common/autotest_common.sh@887 -- # return 0 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.010 11:57:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:06.268 { 00:13:06.268 "nbd_device": "/dev/nbd0", 00:13:06.268 "bdev_name": "Malloc0" 
00:13:06.268 }, 00:13:06.268 { 00:13:06.268 "nbd_device": "/dev/nbd1", 00:13:06.268 "bdev_name": "Malloc1" 00:13:06.268 } 00:13:06.268 ]' 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:06.268 { 00:13:06.268 "nbd_device": "/dev/nbd0", 00:13:06.268 "bdev_name": "Malloc0" 00:13:06.268 }, 00:13:06.268 { 00:13:06.268 "nbd_device": "/dev/nbd1", 00:13:06.268 "bdev_name": "Malloc1" 00:13:06.268 } 00:13:06.268 ]' 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:06.268 /dev/nbd1' 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:06.268 /dev/nbd1' 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@65 -- # count=2 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@95 -- # count=2 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:06.268 11:57:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:06.269 256+0 records in 00:13:06.269 256+0 records out 00:13:06.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00819217 s, 128 MB/s 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:06.269 256+0 records in 00:13:06.269 256+0 records out 00:13:06.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262504 s, 39.9 MB/s 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:06.269 256+0 records in 00:13:06.269 256+0 records out 00:13:06.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295898 s, 35.4 MB/s 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@51 -- # local i 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.269 11:57:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@41 -- # break 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.527 11:57:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:06.785 11:57:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@41 -- # break 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.044 11:57:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@65 -- # true 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@65 -- # count=0 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@104 -- # count=0 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:07.302 11:57:12 -- bdev/nbd_common.sh@109 -- # return 0 00:13:07.302 11:57:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:07.562 11:57:12 -- event/event.sh@35 -- # sleep 3 00:13:07.820 [2024-11-29 11:57:13.111564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:07.820 [2024-11-29 11:57:13.183749] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:13:07.820 [2024-11-29 11:57:13.183761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.820 [2024-11-29 11:57:13.236601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:07.820 [2024-11-29 11:57:13.236948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:11.105 spdk_app_start Round 2 00:13:11.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:11.105 11:57:15 -- event/event.sh@23 -- # for i in {0..2} 00:13:11.105 11:57:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:13:11.105 11:57:15 -- event/event.sh@25 -- # waitforlisten 116853 /var/tmp/spdk-nbd.sock 00:13:11.105 11:57:15 -- common/autotest_common.sh@829 -- # '[' -z 116853 ']' 00:13:11.105 11:57:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:11.105 11:57:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.105 11:57:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:11.105 11:57:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.105 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:13:11.105 11:57:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.105 11:57:16 -- common/autotest_common.sh@862 -- # return 0 00:13:11.105 11:57:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:11.105 Malloc0 00:13:11.105 11:57:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:11.364 Malloc1 00:13:11.364 11:57:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@12 -- # local i 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.364 11:57:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:11.702 /dev/nbd0 00:13:11.702 11:57:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.702 11:57:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.702 11:57:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:11.702 11:57:17 -- common/autotest_common.sh@867 -- # local i 
00:13:11.702 11:57:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:11.702 11:57:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:11.702 11:57:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:11.702 11:57:17 -- common/autotest_common.sh@871 -- # break 00:13:11.702 11:57:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:11.702 11:57:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:11.702 11:57:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:11.702 1+0 records in 00:13:11.702 1+0 records out 00:13:11.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611097 s, 6.7 MB/s 00:13:11.702 11:57:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:11.702 11:57:17 -- common/autotest_common.sh@884 -- # size=4096 00:13:11.702 11:57:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:11.702 11:57:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:11.702 11:57:17 -- common/autotest_common.sh@887 -- # return 0 00:13:11.702 11:57:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.702 11:57:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.702 11:57:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:11.960 /dev/nbd1 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.960 11:57:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:11.960 11:57:17 -- common/autotest_common.sh@867 -- # local i 00:13:11.960 11:57:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:11.960 11:57:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:11.960 11:57:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:11.960 11:57:17 -- common/autotest_common.sh@871 -- # break 00:13:11.960 11:57:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:11.960 11:57:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:11.960 11:57:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:11.960 1+0 records in 00:13:11.960 1+0 records out 00:13:11.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581174 s, 7.0 MB/s 00:13:11.960 11:57:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:11.960 11:57:17 -- common/autotest_common.sh@884 -- # size=4096 00:13:11.960 11:57:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:11.960 11:57:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:11.960 11:57:17 -- common/autotest_common.sh@887 -- # return 0 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.960 11:57:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:12.219 { 00:13:12.219 "nbd_device": "/dev/nbd0", 00:13:12.219 
"bdev_name": "Malloc0" 00:13:12.219 }, 00:13:12.219 { 00:13:12.219 "nbd_device": "/dev/nbd1", 00:13:12.219 "bdev_name": "Malloc1" 00:13:12.219 } 00:13:12.219 ]' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:12.219 { 00:13:12.219 "nbd_device": "/dev/nbd0", 00:13:12.219 "bdev_name": "Malloc0" 00:13:12.219 }, 00:13:12.219 { 00:13:12.219 "nbd_device": "/dev/nbd1", 00:13:12.219 "bdev_name": "Malloc1" 00:13:12.219 } 00:13:12.219 ]' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:12.219 /dev/nbd1' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:12.219 /dev/nbd1' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@65 -- # count=2 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@95 -- # count=2 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:12.219 256+0 records in 00:13:12.219 256+0 records out 00:13:12.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00791635 s, 132 MB/s 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:12.219 256+0 records in 00:13:12.219 256+0 records out 00:13:12.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285784 s, 36.7 MB/s 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.219 11:57:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:12.478 256+0 records in 00:13:12.478 256+0 records out 00:13:12.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280898 s, 37.3 MB/s 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@51 -- # local i 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.478 11:57:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@41 -- # break 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.735 11:57:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@41 -- # break 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.994 11:57:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:13.253 11:57:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:13.253 11:57:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:13.253 11:57:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@65 -- # true 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@65 -- # count=0 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@104 -- # count=0 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:13.512 11:57:18 -- bdev/nbd_common.sh@109 -- # return 0 00:13:13.512 11:57:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:13.770 11:57:19 -- event/event.sh@35 -- # sleep 3 00:13:13.770 [2024-11-29 11:57:19.269734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:14.029 
[2024-11-29 11:57:19.346878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.029 [2024-11-29 11:57:19.346905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.029 [2024-11-29 11:57:19.403865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:14.029 [2024-11-29 11:57:19.404289] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:17.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:17.313 11:57:22 -- event/event.sh@38 -- # waitforlisten 116853 /var/tmp/spdk-nbd.sock 00:13:17.313 11:57:22 -- common/autotest_common.sh@829 -- # '[' -z 116853 ']' 00:13:17.313 11:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:17.313 11:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.313 11:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:17.313 11:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.313 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 11:57:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.313 11:57:22 -- common/autotest_common.sh@862 -- # return 0 00:13:17.313 11:57:22 -- event/event.sh@39 -- # killprocess 116853 00:13:17.313 11:57:22 -- common/autotest_common.sh@936 -- # '[' -z 116853 ']' 00:13:17.313 11:57:22 -- common/autotest_common.sh@940 -- # kill -0 116853 00:13:17.313 11:57:22 -- common/autotest_common.sh@941 -- # uname 00:13:17.313 11:57:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:17.313 11:57:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116853 00:13:17.313 11:57:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:17.313 11:57:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:17.313 11:57:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116853' 00:13:17.313 killing process with pid 116853 00:13:17.313 11:57:22 -- common/autotest_common.sh@955 -- # kill 116853 00:13:17.313 11:57:22 -- common/autotest_common.sh@960 -- # wait 116853 00:13:17.313 spdk_app_start is called in Round 0. 00:13:17.313 Shutdown signal received, stop current app iteration 00:13:17.313 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:13:17.313 spdk_app_start is called in Round 1. 00:13:17.313 Shutdown signal received, stop current app iteration 00:13:17.313 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:13:17.313 spdk_app_start is called in Round 2. 00:13:17.313 Shutdown signal received, stop current app iteration 00:13:17.313 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:13:17.313 spdk_app_start is called in Round 3. 
00:13:17.313 Shutdown signal received, stop current app iteration 00:13:17.313 11:57:22 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:13:17.313 ************************************ 00:13:17.313 END TEST app_repeat 00:13:17.313 ************************************ 00:13:17.313 11:57:22 -- event/event.sh@42 -- # return 0 00:13:17.313 00:13:17.313 real 0m19.433s 00:13:17.313 user 0m44.007s 00:13:17.313 sys 0m2.724s 00:13:17.313 11:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:17.313 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 11:57:22 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:13:17.313 11:57:22 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:17.313 11:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.313 11:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.313 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 ************************************ 00:13:17.313 START TEST cpu_locks 00:13:17.313 ************************************ 00:13:17.313 11:57:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:17.313 * Looking for test storage... 00:13:17.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:17.313 11:57:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:17.313 11:57:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:17.313 11:57:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:17.572 11:57:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:17.572 11:57:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:17.572 11:57:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:17.572 11:57:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:17.572 11:57:22 -- scripts/common.sh@335 -- # IFS=.-: 00:13:17.572 11:57:22 -- scripts/common.sh@335 -- # read -ra ver1 00:13:17.572 11:57:22 -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.572 11:57:22 -- scripts/common.sh@336 -- # read -ra ver2 00:13:17.572 11:57:22 -- scripts/common.sh@337 -- # local 'op=<' 00:13:17.572 11:57:22 -- scripts/common.sh@339 -- # ver1_l=2 00:13:17.572 11:57:22 -- scripts/common.sh@340 -- # ver2_l=1 00:13:17.572 11:57:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:17.572 11:57:22 -- scripts/common.sh@343 -- # case "$op" in 00:13:17.572 11:57:22 -- scripts/common.sh@344 -- # : 1 00:13:17.572 11:57:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:17.572 11:57:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.572 11:57:22 -- scripts/common.sh@364 -- # decimal 1 00:13:17.572 11:57:22 -- scripts/common.sh@352 -- # local d=1 00:13:17.572 11:57:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.572 11:57:22 -- scripts/common.sh@354 -- # echo 1 00:13:17.572 11:57:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:17.572 11:57:22 -- scripts/common.sh@365 -- # decimal 2 00:13:17.572 11:57:22 -- scripts/common.sh@352 -- # local d=2 00:13:17.572 11:57:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.572 11:57:22 -- scripts/common.sh@354 -- # echo 2 00:13:17.572 11:57:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:17.572 11:57:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:17.572 11:57:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:17.572 11:57:22 -- scripts/common.sh@367 -- # return 0 00:13:17.572 11:57:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.572 11:57:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:17.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.572 --rc genhtml_branch_coverage=1 00:13:17.572 --rc genhtml_function_coverage=1 00:13:17.572 --rc genhtml_legend=1 00:13:17.572 --rc geninfo_all_blocks=1 00:13:17.572 --rc geninfo_unexecuted_blocks=1 00:13:17.572 00:13:17.572 ' 00:13:17.572 11:57:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:17.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.572 --rc genhtml_branch_coverage=1 00:13:17.572 --rc genhtml_function_coverage=1 00:13:17.572 --rc genhtml_legend=1 00:13:17.572 --rc geninfo_all_blocks=1 00:13:17.572 --rc geninfo_unexecuted_blocks=1 00:13:17.572 00:13:17.572 ' 00:13:17.572 11:57:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:17.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.572 --rc genhtml_branch_coverage=1 00:13:17.572 --rc genhtml_function_coverage=1 00:13:17.572 --rc genhtml_legend=1 00:13:17.572 --rc geninfo_all_blocks=1 00:13:17.572 --rc geninfo_unexecuted_blocks=1 00:13:17.572 00:13:17.572 ' 00:13:17.572 11:57:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:17.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.572 --rc genhtml_branch_coverage=1 00:13:17.572 --rc genhtml_function_coverage=1 00:13:17.572 --rc genhtml_legend=1 00:13:17.572 --rc geninfo_all_blocks=1 00:13:17.572 --rc geninfo_unexecuted_blocks=1 00:13:17.572 00:13:17.572 ' 00:13:17.572 11:57:22 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:13:17.572 11:57:22 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:13:17.572 11:57:22 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:13:17.572 11:57:22 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:13:17.572 11:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.572 11:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.572 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 ************************************ 00:13:17.572 START TEST default_locks 00:13:17.572 ************************************ 00:13:17.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
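
The lt 1.15 2 gate traced above is scripts/common.sh comparing the installed lcov version component by component before choosing coverage flags. A reduced sketch of cmp_versions, with the decimal-validation helper folded into :-0 defaults:

    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' ]]   # equal versions satisfy neither strict comparison
    }

    cmp_versions 1.15 '<' 2 && echo "lcov older than 2: keep the legacy branch-coverage flags"

Here 1 < 2 already decides the comparison, which is why the LCOV_OPTS with the lcov_branch_coverage flags are exported right after.
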
00:13:17.572 11:57:22 -- common/autotest_common.sh@1114 -- # default_locks 00:13:17.572 11:57:22 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=117393 00:13:17.572 11:57:22 -- event/cpu_locks.sh@47 -- # waitforlisten 117393 00:13:17.572 11:57:22 -- common/autotest_common.sh@829 -- # '[' -z 117393 ']' 00:13:17.572 11:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.572 11:57:22 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:17.572 11:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.572 11:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.572 11:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.573 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.573 [2024-11-29 11:57:23.000261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:17.573 [2024-11-29 11:57:23.000792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117393 ] 00:13:17.830 [2024-11-29 11:57:23.152999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.831 [2024-11-29 11:57:23.257265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:17.831 [2024-11-29 11:57:23.257835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.766 11:57:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.766 11:57:23 -- common/autotest_common.sh@862 -- # return 0 00:13:18.766 11:57:23 -- event/cpu_locks.sh@49 -- # locks_exist 117393 00:13:18.766 11:57:23 -- event/cpu_locks.sh@22 -- # lslocks -p 117393 00:13:18.766 11:57:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:18.766 11:57:24 -- event/cpu_locks.sh@50 -- # killprocess 117393 00:13:18.766 11:57:24 -- common/autotest_common.sh@936 -- # '[' -z 117393 ']' 00:13:18.766 11:57:24 -- common/autotest_common.sh@940 -- # kill -0 117393 00:13:18.766 11:57:24 -- common/autotest_common.sh@941 -- # uname 00:13:18.766 11:57:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.766 11:57:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117393 00:13:18.766 killing process with pid 117393 00:13:18.766 11:57:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.766 11:57:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.766 11:57:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117393' 00:13:18.766 11:57:24 -- common/autotest_common.sh@955 -- # kill 117393 00:13:18.766 11:57:24 -- common/autotest_common.sh@960 -- # wait 117393 00:13:19.334 11:57:24 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 117393 00:13:19.334 11:57:24 -- common/autotest_common.sh@650 -- # local es=0 00:13:19.334 11:57:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117393 00:13:19.334 11:57:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:13:19.334 11:57:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.334 11:57:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:13:19.334 11:57:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.334 11:57:24 -- common/autotest_common.sh@653 
-- # waitforlisten 117393 00:13:19.334 11:57:24 -- common/autotest_common.sh@829 -- # '[' -z 117393 ']' 00:13:19.334 11:57:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.334 11:57:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.334 11:57:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.334 11:57:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.334 ERROR: process (pid: 117393) is no longer running 00:13:19.334 11:57:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.334 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117393) - No such process 00:13:19.334 11:57:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.334 11:57:24 -- common/autotest_common.sh@862 -- # return 1 00:13:19.334 11:57:24 -- common/autotest_common.sh@653 -- # es=1 00:13:19.334 11:57:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:19.334 11:57:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:19.334 11:57:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:19.334 11:57:24 -- event/cpu_locks.sh@54 -- # no_locks 00:13:19.334 11:57:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:19.334 11:57:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:13:19.334 11:57:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:19.334 00:13:19.334 real 0m1.753s 00:13:19.334 user 0m1.849s 00:13:19.334 sys 0m0.533s 00:13:19.334 11:57:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:19.334 11:57:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.334 ************************************ 00:13:19.334 END TEST default_locks 00:13:19.334 ************************************ 00:13:19.334 11:57:24 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:13:19.334 11:57:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.334 11:57:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.334 11:57:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.334 ************************************ 00:13:19.334 START TEST default_locks_via_rpc 00:13:19.334 ************************************ 00:13:19.334 11:57:24 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:13:19.334 11:57:24 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=117441 00:13:19.334 11:57:24 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:19.334 11:57:24 -- event/cpu_locks.sh@63 -- # waitforlisten 117441 00:13:19.334 11:57:24 -- common/autotest_common.sh@829 -- # '[' -z 117441 ']' 00:13:19.334 11:57:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.334 11:57:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.334 11:57:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.334 11:57:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.334 11:57:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.334 [2024-11-29 11:57:24.809419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
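
Behind the default_locks run that just finished: the target is started with -m 0x1, locks_exist confirms it holds a core lock, and after killprocess the no_locks check confirms /var/tmp is clean again. Condensed (the nullglob handling is an assumption; cpu_locks.sh may collect the files differently):

    locks_exist() {
        # spdk_tgt flocks /var/tmp/spdk_cpu_lock_NNN for each core in its mask,
        # so lslocks against the pid must show at least one such entry.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {
        shopt -s nullglob
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))
    }

The NOT waitforlisten call on the dead pid is the negative half of the check: once pid 117393 is gone, waiting on its socket must fail, and the "No such process" line from kill is the expected noise of that path.
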
00:13:19.334 [2024-11-29 11:57:24.809942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117441 ] 00:13:19.593 [2024-11-29 11:57:24.957792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.593 [2024-11-29 11:57:25.053865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.593 [2024-11-29 11:57:25.054360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.533 11:57:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.533 11:57:25 -- common/autotest_common.sh@862 -- # return 0 00:13:20.533 11:57:25 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:20.533 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.533 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:13:20.533 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.533 11:57:25 -- event/cpu_locks.sh@67 -- # no_locks 00:13:20.533 11:57:25 -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:20.533 11:57:25 -- event/cpu_locks.sh@26 -- # local lock_files 00:13:20.533 11:57:25 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:20.533 11:57:25 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:20.533 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.533 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:13:20.533 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.533 11:57:25 -- event/cpu_locks.sh@71 -- # locks_exist 117441 00:13:20.533 11:57:25 -- event/cpu_locks.sh@22 -- # lslocks -p 117441 00:13:20.533 11:57:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:20.792 11:57:26 -- event/cpu_locks.sh@73 -- # killprocess 117441 00:13:20.792 11:57:26 -- common/autotest_common.sh@936 -- # '[' -z 117441 ']' 00:13:20.792 11:57:26 -- common/autotest_common.sh@940 -- # kill -0 117441 00:13:20.792 11:57:26 -- common/autotest_common.sh@941 -- # uname 00:13:20.792 11:57:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:20.792 11:57:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117441 00:13:20.792 11:57:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:20.792 11:57:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:20.792 11:57:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117441' 00:13:20.792 killing process with pid 117441 00:13:20.792 11:57:26 -- common/autotest_common.sh@955 -- # kill 117441 00:13:20.792 11:57:26 -- common/autotest_common.sh@960 -- # wait 117441 00:13:21.358 00:13:21.358 real 0m1.817s 00:13:21.358 user 0m1.937s 00:13:21.358 sys 0m0.567s 00:13:21.358 11:57:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:21.358 11:57:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.358 ************************************ 00:13:21.358 END TEST default_locks_via_rpc 00:13:21.358 ************************************ 00:13:21.358 11:57:26 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:21.358 11:57:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:21.358 11:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.358 11:57:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.358 
************************************ 00:13:21.358 START TEST non_locking_app_on_locked_coremask 00:13:21.358 ************************************ 00:13:21.358 11:57:26 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:13:21.358 11:57:26 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=117501 00:13:21.358 11:57:26 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:21.358 11:57:26 -- event/cpu_locks.sh@81 -- # waitforlisten 117501 /var/tmp/spdk.sock 00:13:21.358 11:57:26 -- common/autotest_common.sh@829 -- # '[' -z 117501 ']' 00:13:21.358 11:57:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.358 11:57:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.358 11:57:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.358 11:57:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.358 11:57:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.358 [2024-11-29 11:57:26.687373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:21.358 [2024-11-29 11:57:26.687877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117501 ] 00:13:21.358 [2024-11-29 11:57:26.833818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.616 [2024-11-29 11:57:26.931471] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:21.616 [2024-11-29 11:57:26.932048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:22.551 11:57:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.551 11:57:27 -- common/autotest_common.sh@862 -- # return 0 00:13:22.551 11:57:27 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=117522 00:13:22.551 11:57:27 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:22.551 11:57:27 -- event/cpu_locks.sh@85 -- # waitforlisten 117522 /var/tmp/spdk2.sock 00:13:22.551 11:57:27 -- common/autotest_common.sh@829 -- # '[' -z 117522 ']' 00:13:22.551 11:57:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:22.551 11:57:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.551 11:57:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:22.551 11:57:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.551 11:57:27 -- common/autotest_common.sh@10 -- # set +x 00:13:22.551 [2024-11-29 11:57:27.795085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:22.551 [2024-11-29 11:57:27.795623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117522 ] 00:13:22.551 [2024-11-29 11:57:27.940696] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
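
The "CPU core locks deactivated" notice just above is the point of this case: the second target shares core mask 0x1 with the first but is started with --disable-cpumask-locks, so it never tries to claim core 0 and comes up cleanly next to the locked instance. Schematically (waitforlisten synchronization omitted):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance: default behaviour, claims core 0 via /var/tmp/spdk_cpu_lock_000.
    "$tgt" -m 0x1 &
    locked_pid=$!

    # Second instance: same mask, no lock attempt, and its own RPC socket.
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    unlocked_pid=$!
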
00:13:22.551 [2024-11-29 11:57:27.940819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.809 [2024-11-29 11:57:28.147920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.809 [2024-11-29 11:57:28.148214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.376 11:57:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.376 11:57:28 -- common/autotest_common.sh@862 -- # return 0 00:13:23.376 11:57:28 -- event/cpu_locks.sh@87 -- # locks_exist 117501 00:13:23.376 11:57:28 -- event/cpu_locks.sh@22 -- # lslocks -p 117501 00:13:23.376 11:57:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:23.942 11:57:29 -- event/cpu_locks.sh@89 -- # killprocess 117501 00:13:23.942 11:57:29 -- common/autotest_common.sh@936 -- # '[' -z 117501 ']' 00:13:23.942 11:57:29 -- common/autotest_common.sh@940 -- # kill -0 117501 00:13:23.942 11:57:29 -- common/autotest_common.sh@941 -- # uname 00:13:23.942 11:57:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.942 11:57:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117501 00:13:23.942 11:57:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.942 11:57:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.942 11:57:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117501' 00:13:23.942 killing process with pid 117501 00:13:23.942 11:57:29 -- common/autotest_common.sh@955 -- # kill 117501 00:13:23.942 11:57:29 -- common/autotest_common.sh@960 -- # wait 117501 00:13:24.876 11:57:30 -- event/cpu_locks.sh@90 -- # killprocess 117522 00:13:24.876 11:57:30 -- common/autotest_common.sh@936 -- # '[' -z 117522 ']' 00:13:24.876 11:57:30 -- common/autotest_common.sh@940 -- # kill -0 117522 00:13:24.876 11:57:30 -- common/autotest_common.sh@941 -- # uname 00:13:24.876 11:57:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:24.876 11:57:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117522 00:13:24.876 11:57:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:24.876 11:57:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:24.876 killing process with pid 117522 00:13:24.876 11:57:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117522' 00:13:24.876 11:57:30 -- common/autotest_common.sh@955 -- # kill 117522 00:13:24.876 11:57:30 -- common/autotest_common.sh@960 -- # wait 117522 00:13:25.443 00:13:25.443 real 0m4.098s 00:13:25.443 user 0m4.623s 00:13:25.443 sys 0m1.154s 00:13:25.443 11:57:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:25.443 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.443 ************************************ 00:13:25.443 END TEST non_locking_app_on_locked_coremask 00:13:25.443 ************************************ 00:13:25.443 11:57:30 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:25.443 11:57:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.443 11:57:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.443 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.443 ************************************ 00:13:25.443 START TEST locking_app_on_unlocked_coremask 00:13:25.443 ************************************ 00:13:25.443 11:57:30 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:13:25.443 
11:57:30 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=117596 00:13:25.443 11:57:30 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:25.443 11:57:30 -- event/cpu_locks.sh@99 -- # waitforlisten 117596 /var/tmp/spdk.sock 00:13:25.443 11:57:30 -- common/autotest_common.sh@829 -- # '[' -z 117596 ']' 00:13:25.443 11:57:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.443 11:57:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.443 11:57:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.443 11:57:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.443 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.443 [2024-11-29 11:57:30.852868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:25.443 [2024-11-29 11:57:30.853437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117596 ] 00:13:25.702 [2024-11-29 11:57:31.009240] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:25.702 [2024-11-29 11:57:31.009653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.702 [2024-11-29 11:57:31.101953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:25.702 [2024-11-29 11:57:31.102529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.638 11:57:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.638 11:57:31 -- common/autotest_common.sh@862 -- # return 0 00:13:26.638 11:57:31 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:26.638 11:57:31 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117617 00:13:26.638 11:57:31 -- event/cpu_locks.sh@103 -- # waitforlisten 117617 /var/tmp/spdk2.sock 00:13:26.638 11:57:31 -- common/autotest_common.sh@829 -- # '[' -z 117617 ']' 00:13:26.638 11:57:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:26.638 11:57:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.638 11:57:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:26.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:26.638 11:57:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.638 11:57:31 -- common/autotest_common.sh@10 -- # set +x 00:13:26.638 [2024-11-29 11:57:31.854496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
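
locking_app_on_unlocked_coremask, being set up here, is the mirror image of the previous case: the first target opts out of locking, so the second target with default locking can claim core 0, and the lock file should be attributed to it alone. In outline (the pid variables stand in for the waitforlisten results):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # Target 1 runs on core 0 but never creates a lock file.
    "$tgt" -m 0x1 --disable-cpumask-locks &
    pid1=$!

    # Target 2 keeps default locking on the same mask and should acquire the lock.
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!

    lslocks -p "$pid2" | grep -q spdk_cpu_lock   # the lock belongs to the second target
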
00:13:26.638 [2024-11-29 11:57:31.854955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117617 ] 00:13:26.638 [2024-11-29 11:57:32.004119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.897 [2024-11-29 11:57:32.197587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.897 [2024-11-29 11:57:32.197889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.464 11:57:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.464 11:57:32 -- common/autotest_common.sh@862 -- # return 0 00:13:27.464 11:57:32 -- event/cpu_locks.sh@105 -- # locks_exist 117617 00:13:27.464 11:57:32 -- event/cpu_locks.sh@22 -- # lslocks -p 117617 00:13:27.464 11:57:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:28.032 11:57:33 -- event/cpu_locks.sh@107 -- # killprocess 117596 00:13:28.032 11:57:33 -- common/autotest_common.sh@936 -- # '[' -z 117596 ']' 00:13:28.032 11:57:33 -- common/autotest_common.sh@940 -- # kill -0 117596 00:13:28.032 11:57:33 -- common/autotest_common.sh@941 -- # uname 00:13:28.032 11:57:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.032 11:57:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117596 00:13:28.032 11:57:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:28.032 11:57:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:28.032 11:57:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117596' 00:13:28.032 killing process with pid 117596 00:13:28.032 11:57:33 -- common/autotest_common.sh@955 -- # kill 117596 00:13:28.032 11:57:33 -- common/autotest_common.sh@960 -- # wait 117596 00:13:28.968 11:57:34 -- event/cpu_locks.sh@108 -- # killprocess 117617 00:13:28.968 11:57:34 -- common/autotest_common.sh@936 -- # '[' -z 117617 ']' 00:13:28.968 11:57:34 -- common/autotest_common.sh@940 -- # kill -0 117617 00:13:28.968 11:57:34 -- common/autotest_common.sh@941 -- # uname 00:13:28.968 11:57:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.968 11:57:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117617 00:13:28.968 killing process with pid 117617 00:13:28.968 11:57:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:28.968 11:57:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:28.968 11:57:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117617' 00:13:28.968 11:57:34 -- common/autotest_common.sh@955 -- # kill 117617 00:13:28.968 11:57:34 -- common/autotest_common.sh@960 -- # wait 117617 00:13:29.227 ************************************ 00:13:29.227 END TEST locking_app_on_unlocked_coremask 00:13:29.227 ************************************ 00:13:29.227 00:13:29.227 real 0m3.952s 00:13:29.227 user 0m4.325s 00:13:29.227 sys 0m1.133s 00:13:29.227 11:57:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.227 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.485 11:57:34 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:29.485 11:57:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:29.485 11:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.485 11:57:34 -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.485 ************************************ 00:13:29.485 START TEST locking_app_on_locked_coremask 00:13:29.485 ************************************ 00:13:29.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.485 11:57:34 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:13:29.485 11:57:34 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117694 00:13:29.485 11:57:34 -- event/cpu_locks.sh@116 -- # waitforlisten 117694 /var/tmp/spdk.sock 00:13:29.485 11:57:34 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:29.485 11:57:34 -- common/autotest_common.sh@829 -- # '[' -z 117694 ']' 00:13:29.485 11:57:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.485 11:57:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.485 11:57:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.485 11:57:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.485 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.485 [2024-11-29 11:57:34.859378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:29.485 [2024-11-29 11:57:34.859680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117694 ] 00:13:29.744 [2024-11-29 11:57:35.000615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.744 [2024-11-29 11:57:35.093361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:29.744 [2024-11-29 11:57:35.093885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.678 11:57:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.678 11:57:35 -- common/autotest_common.sh@862 -- # return 0 00:13:30.678 11:57:35 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117709 00:13:30.678 11:57:35 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117709 /var/tmp/spdk2.sock 00:13:30.678 11:57:35 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:30.678 11:57:35 -- common/autotest_common.sh@650 -- # local es=0 00:13:30.678 11:57:35 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117709 /var/tmp/spdk2.sock 00:13:30.678 11:57:35 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:13:30.678 11:57:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.678 11:57:35 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:13:30.679 11:57:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.679 11:57:35 -- common/autotest_common.sh@653 -- # waitforlisten 117709 /var/tmp/spdk2.sock 00:13:30.679 11:57:35 -- common/autotest_common.sh@829 -- # '[' -z 117709 ']' 00:13:30.679 11:57:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:30.679 11:57:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.679 11:57:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
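
Here the second target keeps default locking on a core that pid 117694 already holds, so its startup is expected to fail; the NOT wrapper around waitforlisten turns that expected failure into a pass. A rough sketch (the real helper in autotest_common.sh also inspects the exit code, for example to tell crashes above 128 from clean failures):

    NOT() {
        # Succeed only if the wrapped command fails.
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Expected to fail: core 0 is already locked by the first target.
    NOT waitforlisten 117709 /var/tmp/spdk2.sock
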
00:13:30.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:30.679 11:57:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.679 11:57:35 -- common/autotest_common.sh@10 -- # set +x 00:13:30.679 [2024-11-29 11:57:35.904434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:30.679 [2024-11-29 11:57:35.905833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117709 ] 00:13:30.679 [2024-11-29 11:57:36.060574] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117694 has claimed it. 00:13:30.679 [2024-11-29 11:57:36.060676] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:31.244 ERROR: process (pid: 117709) is no longer running 00:13:31.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117709) - No such process 00:13:31.244 11:57:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.244 11:57:36 -- common/autotest_common.sh@862 -- # return 1 00:13:31.244 11:57:36 -- common/autotest_common.sh@653 -- # es=1 00:13:31.244 11:57:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.244 11:57:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.244 11:57:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.244 11:57:36 -- event/cpu_locks.sh@122 -- # locks_exist 117694 00:13:31.244 11:57:36 -- event/cpu_locks.sh@22 -- # lslocks -p 117694 00:13:31.244 11:57:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:31.501 11:57:36 -- event/cpu_locks.sh@124 -- # killprocess 117694 00:13:31.501 11:57:36 -- common/autotest_common.sh@936 -- # '[' -z 117694 ']' 00:13:31.502 11:57:36 -- common/autotest_common.sh@940 -- # kill -0 117694 00:13:31.502 11:57:36 -- common/autotest_common.sh@941 -- # uname 00:13:31.502 11:57:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:31.502 11:57:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117694 00:13:31.502 11:57:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:31.502 11:57:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:31.502 11:57:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117694' 00:13:31.502 killing process with pid 117694 00:13:31.502 11:57:36 -- common/autotest_common.sh@955 -- # kill 117694 00:13:31.502 11:57:36 -- common/autotest_common.sh@960 -- # wait 117694 00:13:32.067 00:13:32.067 real 0m2.554s 00:13:32.067 user 0m2.927s 00:13:32.067 sys 0m0.664s 00:13:32.067 11:57:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:32.067 11:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 ************************************ 00:13:32.067 END TEST locking_app_on_locked_coremask 00:13:32.067 ************************************ 00:13:32.067 11:57:37 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:32.067 11:57:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:32.067 11:57:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:32.067 11:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 ************************************ 00:13:32.067 START TEST locking_overlapped_coremask 00:13:32.067 
************************************ 00:13:32.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.067 11:57:37 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:13:32.067 11:57:37 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117766 00:13:32.067 11:57:37 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:32.067 11:57:37 -- event/cpu_locks.sh@133 -- # waitforlisten 117766 /var/tmp/spdk.sock 00:13:32.067 11:57:37 -- common/autotest_common.sh@829 -- # '[' -z 117766 ']' 00:13:32.067 11:57:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.067 11:57:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.067 11:57:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.067 11:57:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.067 11:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 [2024-11-29 11:57:37.449298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:32.067 [2024-11-29 11:57:37.450257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117766 ] 00:13:32.336 [2024-11-29 11:57:37.608178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.336 [2024-11-29 11:57:37.692496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.336 [2024-11-29 11:57:37.693130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.336 [2024-11-29 11:57:37.693272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.336 [2024-11-29 11:57:37.693278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.267 11:57:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.267 11:57:38 -- common/autotest_common.sh@862 -- # return 0 00:13:33.267 11:57:38 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117789 00:13:33.267 11:57:38 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117789 /var/tmp/spdk2.sock 00:13:33.267 11:57:38 -- common/autotest_common.sh@650 -- # local es=0 00:13:33.267 11:57:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117789 /var/tmp/spdk2.sock 00:13:33.267 11:57:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:13:33.267 11:57:38 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:33.267 11:57:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.267 11:57:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:13:33.267 11:57:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.267 11:57:38 -- common/autotest_common.sh@653 -- # waitforlisten 117789 /var/tmp/spdk2.sock 00:13:33.267 11:57:38 -- common/autotest_common.sh@829 -- # '[' -z 117789 ']' 00:13:33.267 11:57:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:33.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:33.267 11:57:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.267 11:57:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:33.267 11:57:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.267 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:13:33.267 [2024-11-29 11:57:38.500887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:33.267 [2024-11-29 11:57:38.501346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117789 ] 00:13:33.267 [2024-11-29 11:57:38.665871] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117766 has claimed it. 00:13:33.267 [2024-11-29 11:57:38.665982] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:33.830 ERROR: process (pid: 117789) is no longer running 00:13:33.830 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117789) - No such process 00:13:33.830 11:57:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.831 11:57:39 -- common/autotest_common.sh@862 -- # return 1 00:13:33.831 11:57:39 -- common/autotest_common.sh@653 -- # es=1 00:13:33.831 11:57:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:33.831 11:57:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:33.831 11:57:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:33.831 11:57:39 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:33.831 11:57:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:33.831 11:57:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:33.831 11:57:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:33.831 11:57:39 -- event/cpu_locks.sh@141 -- # killprocess 117766 00:13:33.831 11:57:39 -- common/autotest_common.sh@936 -- # '[' -z 117766 ']' 00:13:33.831 11:57:39 -- common/autotest_common.sh@940 -- # kill -0 117766 00:13:33.831 11:57:39 -- common/autotest_common.sh@941 -- # uname 00:13:33.831 11:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:33.831 11:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117766 00:13:33.831 11:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:33.831 killing process with pid 117766 00:13:33.831 11:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:33.831 11:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117766' 00:13:33.831 11:57:39 -- common/autotest_common.sh@955 -- # kill 117766 00:13:33.831 11:57:39 -- common/autotest_common.sh@960 -- # wait 117766 00:13:34.397 ************************************ 00:13:34.397 END TEST locking_overlapped_coremask 00:13:34.397 ************************************ 00:13:34.397 00:13:34.397 real 0m2.272s 00:13:34.397 user 0m6.173s 00:13:34.397 sys 0m0.535s 00:13:34.397 11:57:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:34.397 11:57:39 -- common/autotest_common.sh@10 -- # set +x 
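
That is the overlapping-mask failure in full: the first target holds cores 0-2 (-m 0x7), the second asks for cores 2-4 (-m 0x1c), and the claim on core 2 is refused. check_remaining_locks then verifies that exactly the three lock files of the surviving target are left; from the trace it is essentially:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }
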
00:13:34.397 11:57:39 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:34.397 11:57:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:34.397 11:57:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.397 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:13:34.398 ************************************ 00:13:34.398 START TEST locking_overlapped_coremask_via_rpc 00:13:34.398 ************************************ 00:13:34.398 11:57:39 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:13:34.398 11:57:39 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117834 00:13:34.398 11:57:39 -- event/cpu_locks.sh@149 -- # waitforlisten 117834 /var/tmp/spdk.sock 00:13:34.398 11:57:39 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:34.398 11:57:39 -- common/autotest_common.sh@829 -- # '[' -z 117834 ']' 00:13:34.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.398 11:57:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.398 11:57:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.398 11:57:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.398 11:57:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.398 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:13:34.398 [2024-11-29 11:57:39.797579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:34.398 [2024-11-29 11:57:39.798220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117834 ] 00:13:34.656 [2024-11-29 11:57:39.974089] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:34.656 [2024-11-29 11:57:39.974428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:34.656 [2024-11-29 11:57:40.058231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:34.656 [2024-11-29 11:57:40.059071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.656 [2024-11-29 11:57:40.059229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.656 [2024-11-29 11:57:40.059250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.222 11:57:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.222 11:57:40 -- common/autotest_common.sh@862 -- # return 0 00:13:35.222 11:57:40 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117857 00:13:35.222 11:57:40 -- event/cpu_locks.sh@153 -- # waitforlisten 117857 /var/tmp/spdk2.sock 00:13:35.222 11:57:40 -- common/autotest_common.sh@829 -- # '[' -z 117857 ']' 00:13:35.222 11:57:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:35.222 11:57:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.222 11:57:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:35.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:35.222 11:57:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.222 11:57:40 -- common/autotest_common.sh@10 -- # set +x 00:13:35.222 11:57:40 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:35.481 [2024-11-29 11:57:40.734873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:35.481 [2024-11-29 11:57:40.735498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117857 ] 00:13:35.481 [2024-11-29 11:57:40.896205] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:35.481 [2024-11-29 11:57:40.896301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.739 [2024-11-29 11:57:41.069788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.739 [2024-11-29 11:57:41.070519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.739 [2024-11-29 11:57:41.082445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:35.739 [2024-11-29 11:57:41.082446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.306 11:57:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.306 11:57:41 -- common/autotest_common.sh@862 -- # return 0 00:13:36.306 11:57:41 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:36.306 11:57:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.306 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:36.306 11:57:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.306 11:57:41 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:36.306 11:57:41 -- common/autotest_common.sh@650 -- # local es=0 00:13:36.306 11:57:41 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:36.306 11:57:41 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:36.306 11:57:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.306 11:57:41 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:36.306 11:57:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.306 11:57:41 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:36.306 11:57:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.306 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:36.306 [2024-11-29 11:57:41.714508] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117834 has claimed it. 
00:13:36.306 request: 00:13:36.306 { 00:13:36.306 "method": "framework_enable_cpumask_locks", 00:13:36.306 "req_id": 1 00:13:36.306 } 00:13:36.306 Got JSON-RPC error response 00:13:36.306 response: 00:13:36.306 { 00:13:36.306 "code": -32603, 00:13:36.306 "message": "Failed to claim CPU core: 2" 00:13:36.306 } 00:13:36.306 11:57:41 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.306 11:57:41 -- common/autotest_common.sh@653 -- # es=1 00:13:36.306 11:57:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.306 11:57:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.306 11:57:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.306 11:57:41 -- event/cpu_locks.sh@158 -- # waitforlisten 117834 /var/tmp/spdk.sock 00:13:36.306 11:57:41 -- common/autotest_common.sh@829 -- # '[' -z 117834 ']' 00:13:36.306 11:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.306 11:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.306 11:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.306 11:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.306 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:36.565 11:57:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.565 11:57:41 -- common/autotest_common.sh@862 -- # return 0 00:13:36.565 11:57:41 -- event/cpu_locks.sh@159 -- # waitforlisten 117857 /var/tmp/spdk2.sock 00:13:36.565 11:57:41 -- common/autotest_common.sh@829 -- # '[' -z 117857 ']' 00:13:36.565 11:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:36.565 11:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:36.565 11:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
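
The JSON-RPC exchange above is the same core-2 collision driven through the RPC interface: both targets were started with --disable-cpumask-locks, framework_enable_cpumask_locks then succeeds for the first target (mask 0x7) and is rejected for the second (mask 0x1c) with error -32603. Replayed by hand it would look roughly like this; the expected-failure handling is illustrative only:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First target (default socket /var/tmp/spdk.sock): claims cores 0-2, should succeed.
    "$rpc" framework_enable_cpumask_locks

    # Second target: overlaps on core 2, expect "Failed to claim CPU core: 2" (-32603).
    if ! "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "core 2 already claimed, as expected"
    fi
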
00:13:36.565 11:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.565 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:36.823 11:57:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.823 11:57:42 -- common/autotest_common.sh@862 -- # return 0 00:13:36.823 11:57:42 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:36.823 11:57:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:36.823 11:57:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:36.823 11:57:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:36.823 00:13:36.823 real 0m2.546s 00:13:36.823 user 0m1.359s 00:13:36.823 sys 0m0.144s 00:13:36.823 11:57:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:36.823 11:57:42 -- common/autotest_common.sh@10 -- # set +x 00:13:36.823 ************************************ 00:13:36.823 END TEST locking_overlapped_coremask_via_rpc 00:13:36.823 ************************************ 00:13:36.823 11:57:42 -- event/cpu_locks.sh@174 -- # cleanup 00:13:36.823 11:57:42 -- event/cpu_locks.sh@15 -- # [[ -z 117834 ]] 00:13:36.823 11:57:42 -- event/cpu_locks.sh@15 -- # killprocess 117834 00:13:36.823 11:57:42 -- common/autotest_common.sh@936 -- # '[' -z 117834 ']' 00:13:36.823 11:57:42 -- common/autotest_common.sh@940 -- # kill -0 117834 00:13:36.823 11:57:42 -- common/autotest_common.sh@941 -- # uname 00:13:36.823 11:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.823 11:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117834 00:13:36.823 11:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:36.823 killing process with pid 117834 00:13:36.823 11:57:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:36.823 11:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117834' 00:13:36.823 11:57:42 -- common/autotest_common.sh@955 -- # kill 117834 00:13:36.823 11:57:42 -- common/autotest_common.sh@960 -- # wait 117834 00:13:37.389 11:57:42 -- event/cpu_locks.sh@16 -- # [[ -z 117857 ]] 00:13:37.390 11:57:42 -- event/cpu_locks.sh@16 -- # killprocess 117857 00:13:37.390 11:57:42 -- common/autotest_common.sh@936 -- # '[' -z 117857 ']' 00:13:37.390 11:57:42 -- common/autotest_common.sh@940 -- # kill -0 117857 00:13:37.390 11:57:42 -- common/autotest_common.sh@941 -- # uname 00:13:37.390 11:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.390 11:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117857 00:13:37.390 killing process with pid 117857 00:13:37.390 11:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:37.390 11:57:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:37.390 11:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117857' 00:13:37.390 11:57:42 -- common/autotest_common.sh@955 -- # kill 117857 00:13:37.390 11:57:42 -- common/autotest_common.sh@960 -- # wait 117857 00:13:37.957 11:57:43 -- event/cpu_locks.sh@18 -- # rm -f 00:13:37.957 Process with pid 117834 is not found 00:13:37.957 Process with pid 117857 is not found 00:13:37.957 11:57:43 -- event/cpu_locks.sh@1 -- # cleanup 00:13:37.957 11:57:43 -- event/cpu_locks.sh@15 -- # [[ -z 
117834 ]] 00:13:37.957 11:57:43 -- event/cpu_locks.sh@15 -- # killprocess 117834 00:13:37.957 11:57:43 -- common/autotest_common.sh@936 -- # '[' -z 117834 ']' 00:13:37.957 11:57:43 -- common/autotest_common.sh@940 -- # kill -0 117834 00:13:37.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (117834) - No such process 00:13:37.957 11:57:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 117834 is not found' 00:13:37.957 11:57:43 -- event/cpu_locks.sh@16 -- # [[ -z 117857 ]] 00:13:37.957 11:57:43 -- event/cpu_locks.sh@16 -- # killprocess 117857 00:13:37.957 11:57:43 -- common/autotest_common.sh@936 -- # '[' -z 117857 ']' 00:13:37.957 11:57:43 -- common/autotest_common.sh@940 -- # kill -0 117857 00:13:37.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (117857) - No such process 00:13:37.957 11:57:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 117857 is not found' 00:13:37.957 11:57:43 -- event/cpu_locks.sh@18 -- # rm -f 00:13:37.957 ************************************ 00:13:37.957 END TEST cpu_locks 00:13:37.957 ************************************ 00:13:37.957 00:13:37.957 real 0m20.495s 00:13:37.957 user 0m35.576s 00:13:37.957 sys 0m5.675s 00:13:37.957 11:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:37.957 11:57:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.957 ************************************ 00:13:37.957 END TEST event 00:13:37.957 ************************************ 00:13:37.957 00:13:37.957 real 0m48.700s 00:13:37.957 user 1m33.842s 00:13:37.957 sys 0m9.273s 00:13:37.957 11:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:37.957 11:57:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.957 11:57:43 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:37.957 11:57:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:37.957 11:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.957 11:57:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.957 ************************************ 00:13:37.957 START TEST thread 00:13:37.957 ************************************ 00:13:37.957 11:57:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:37.957 * Looking for test storage... 
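
All of the START TEST / END TEST banners and the real/user/sys summaries in this log come from the run_test wrapper in autotest_common.sh. The sketch below only reproduces the visible behaviour (banners plus time); the actual helper does more bookkeeping around argument checks and xtrace:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
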
00:13:37.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:37.957 11:57:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:37.957 11:57:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:37.957 11:57:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:38.263 11:57:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:38.263 11:57:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:38.263 11:57:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:38.263 11:57:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:38.263 11:57:43 -- scripts/common.sh@335 -- # IFS=.-: 00:13:38.263 11:57:43 -- scripts/common.sh@335 -- # read -ra ver1 00:13:38.263 11:57:43 -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.263 11:57:43 -- scripts/common.sh@336 -- # read -ra ver2 00:13:38.263 11:57:43 -- scripts/common.sh@337 -- # local 'op=<' 00:13:38.263 11:57:43 -- scripts/common.sh@339 -- # ver1_l=2 00:13:38.263 11:57:43 -- scripts/common.sh@340 -- # ver2_l=1 00:13:38.263 11:57:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:38.263 11:57:43 -- scripts/common.sh@343 -- # case "$op" in 00:13:38.263 11:57:43 -- scripts/common.sh@344 -- # : 1 00:13:38.263 11:57:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:38.263 11:57:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:38.263 11:57:43 -- scripts/common.sh@364 -- # decimal 1 00:13:38.263 11:57:43 -- scripts/common.sh@352 -- # local d=1 00:13:38.263 11:57:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.263 11:57:43 -- scripts/common.sh@354 -- # echo 1 00:13:38.263 11:57:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:38.263 11:57:43 -- scripts/common.sh@365 -- # decimal 2 00:13:38.263 11:57:43 -- scripts/common.sh@352 -- # local d=2 00:13:38.263 11:57:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.263 11:57:43 -- scripts/common.sh@354 -- # echo 2 00:13:38.263 11:57:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:38.263 11:57:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:38.263 11:57:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:38.263 11:57:43 -- scripts/common.sh@367 -- # return 0 00:13:38.263 11:57:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.263 11:57:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.263 --rc genhtml_branch_coverage=1 00:13:38.263 --rc genhtml_function_coverage=1 00:13:38.263 --rc genhtml_legend=1 00:13:38.263 --rc geninfo_all_blocks=1 00:13:38.263 --rc geninfo_unexecuted_blocks=1 00:13:38.263 00:13:38.263 ' 00:13:38.263 11:57:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.263 --rc genhtml_branch_coverage=1 00:13:38.263 --rc genhtml_function_coverage=1 00:13:38.263 --rc genhtml_legend=1 00:13:38.263 --rc geninfo_all_blocks=1 00:13:38.263 --rc geninfo_unexecuted_blocks=1 00:13:38.263 00:13:38.263 ' 00:13:38.263 11:57:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.263 --rc genhtml_branch_coverage=1 00:13:38.263 --rc genhtml_function_coverage=1 00:13:38.263 --rc genhtml_legend=1 00:13:38.263 --rc geninfo_all_blocks=1 00:13:38.263 --rc geninfo_unexecuted_blocks=1 00:13:38.263 00:13:38.263 ' 00:13:38.263 11:57:43 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.263 --rc genhtml_branch_coverage=1 00:13:38.263 --rc genhtml_function_coverage=1 00:13:38.263 --rc genhtml_legend=1 00:13:38.263 --rc geninfo_all_blocks=1 00:13:38.263 --rc geninfo_unexecuted_blocks=1 00:13:38.263 00:13:38.263 ' 00:13:38.263 11:57:43 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:38.263 11:57:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:38.263 11:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.263 11:57:43 -- common/autotest_common.sh@10 -- # set +x 00:13:38.263 ************************************ 00:13:38.263 START TEST thread_poller_perf 00:13:38.263 ************************************ 00:13:38.263 11:57:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:38.263 [2024-11-29 11:57:43.519707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:38.263 [2024-11-29 11:57:43.520117] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117999 ] 00:13:38.263 [2024-11-29 11:57:43.669007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.538 [2024-11-29 11:57:43.753049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.538 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:39.472 [2024-11-29T11:57:44.983Z] ====================================== 00:13:39.472 [2024-11-29T11:57:44.983Z] busy:2213045356 (cyc) 00:13:39.472 [2024-11-29T11:57:44.983Z] total_run_count: 294000 00:13:39.472 [2024-11-29T11:57:44.983Z] tsc_hz: 2200000000 (cyc) 00:13:39.472 [2024-11-29T11:57:44.983Z] ====================================== 00:13:39.472 [2024-11-29T11:57:44.983Z] poller_cost: 7527 (cyc), 3421 (nsec) 00:13:39.472 00:13:39.472 real 0m1.379s 00:13:39.472 user 0m1.189s 00:13:39.472 sys 0m0.089s 00:13:39.472 ************************************ 00:13:39.472 END TEST thread_poller_perf 00:13:39.472 ************************************ 00:13:39.472 11:57:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.472 11:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:39.472 11:57:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:39.472 11:57:44 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:39.472 11:57:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.472 11:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:39.472 ************************************ 00:13:39.472 START TEST thread_poller_perf 00:13:39.472 ************************************ 00:13:39.472 11:57:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:39.472 [2024-11-29 11:57:44.945655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:39.472 [2024-11-29 11:57:44.946083] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118042 ] 00:13:39.730 [2024-11-29 11:57:45.096999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.730 [2024-11-29 11:57:45.191316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.730 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:41.101 [2024-11-29T11:57:46.612Z] ====================================== 00:13:41.101 [2024-11-29T11:57:46.612Z] busy:2206147129 (cyc) 00:13:41.101 [2024-11-29T11:57:46.612Z] total_run_count: 3742000 00:13:41.101 [2024-11-29T11:57:46.612Z] tsc_hz: 2200000000 (cyc) 00:13:41.101 [2024-11-29T11:57:46.612Z] ====================================== 00:13:41.101 [2024-11-29T11:57:46.612Z] poller_cost: 589 (cyc), 267 (nsec) 00:13:41.101 ************************************ 00:13:41.101 END TEST thread_poller_perf 00:13:41.101 ************************************ 00:13:41.101 00:13:41.101 real 0m1.376s 00:13:41.101 user 0m1.191s 00:13:41.101 sys 0m0.084s 00:13:41.101 11:57:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.101 11:57:46 -- common/autotest_common.sh@10 -- # set +x 00:13:41.101 11:57:46 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:13:41.101 11:57:46 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:13:41.101 11:57:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.101 11:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.101 11:57:46 -- common/autotest_common.sh@10 -- # set +x 00:13:41.101 ************************************ 00:13:41.101 START TEST thread_spdk_lock 00:13:41.101 ************************************ 00:13:41.101 11:57:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:13:41.101 [2024-11-29 11:57:46.378844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
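[editorial note, not part of the captured log: a hedged sanity check of the two poller_perf results above. The reported poller_cost values are consistent with busy cycles divided by total_run_count, with the nsec figure derived from the reported tsc_hz of 2200000000 (2.2 GHz).]
  # hedged sketch, not captured output
  echo $(( 2213045356 / 294000 ))    # ~7527 cyc per poll for the 1 us period run (~3421 nsec)
  echo $(( 2206147129 / 3742000 ))   # ~589 cyc per poll for the 0 period run (~267 nsec)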
00:13:41.101 [2024-11-29 11:57:46.379371] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118085 ] 00:13:41.101 [2024-11-29 11:57:46.532894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:41.101 [2024-11-29 11:57:46.612105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.358 [2024-11-29 11:57:46.612111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.922 [2024-11-29 11:57:47.141835] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:41.922 [2024-11-29 11:57:47.142234] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:13:41.922 [2024-11-29 11:57:47.142465] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x55c12f944980 00:13:41.922 [2024-11-29 11:57:47.143964] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:41.922 [2024-11-29 11:57:47.144205] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:41.922 [2024-11-29 11:57:47.144380] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:41.922 Starting test contend 00:13:41.922 Worker Delay Wait us Hold us Total us 00:13:41.922 0 3 125949 196350 322299 00:13:41.922 1 5 60183 299949 360133 00:13:41.922 PASS test contend 00:13:41.922 Starting test hold_by_poller 00:13:41.922 PASS test hold_by_poller 00:13:41.922 Starting test hold_by_message 00:13:41.922 PASS test hold_by_message 00:13:41.922 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:13:41.922 100014 assertions passed 00:13:41.922 0 assertions failed 00:13:41.922 ************************************ 00:13:41.922 END TEST thread_spdk_lock 00:13:41.922 ************************************ 00:13:41.922 00:13:41.922 real 0m0.915s 00:13:41.922 user 0m1.250s 00:13:41.922 sys 0m0.096s 00:13:41.922 11:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.922 11:57:47 -- common/autotest_common.sh@10 -- # set +x 00:13:41.922 ************************************ 00:13:41.922 END TEST thread 00:13:41.922 ************************************ 00:13:41.922 00:13:41.922 real 0m3.980s 00:13:41.922 user 0m3.818s 00:13:41.922 sys 0m0.392s 00:13:41.922 11:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.922 11:57:47 -- common/autotest_common.sh@10 -- # set +x 00:13:41.922 11:57:47 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:41.922 11:57:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.922 11:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.922 11:57:47 -- common/autotest_common.sh@10 -- # set +x 00:13:41.923 ************************************ 00:13:41.923 START TEST accel 00:13:41.923 
************************************ 00:13:41.923 11:57:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:41.923 * Looking for test storage... 00:13:41.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:41.923 11:57:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:41.923 11:57:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:41.923 11:57:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:42.181 11:57:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:42.181 11:57:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:42.181 11:57:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:42.181 11:57:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:42.181 11:57:47 -- scripts/common.sh@335 -- # IFS=.-: 00:13:42.181 11:57:47 -- scripts/common.sh@335 -- # read -ra ver1 00:13:42.181 11:57:47 -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.181 11:57:47 -- scripts/common.sh@336 -- # read -ra ver2 00:13:42.181 11:57:47 -- scripts/common.sh@337 -- # local 'op=<' 00:13:42.181 11:57:47 -- scripts/common.sh@339 -- # ver1_l=2 00:13:42.181 11:57:47 -- scripts/common.sh@340 -- # ver2_l=1 00:13:42.181 11:57:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:42.181 11:57:47 -- scripts/common.sh@343 -- # case "$op" in 00:13:42.181 11:57:47 -- scripts/common.sh@344 -- # : 1 00:13:42.181 11:57:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:42.181 11:57:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.181 11:57:47 -- scripts/common.sh@364 -- # decimal 1 00:13:42.181 11:57:47 -- scripts/common.sh@352 -- # local d=1 00:13:42.181 11:57:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.181 11:57:47 -- scripts/common.sh@354 -- # echo 1 00:13:42.181 11:57:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:42.181 11:57:47 -- scripts/common.sh@365 -- # decimal 2 00:13:42.181 11:57:47 -- scripts/common.sh@352 -- # local d=2 00:13:42.181 11:57:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.181 11:57:47 -- scripts/common.sh@354 -- # echo 2 00:13:42.181 11:57:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:42.181 11:57:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:42.181 11:57:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:42.181 11:57:47 -- scripts/common.sh@367 -- # return 0 00:13:42.181 11:57:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.181 11:57:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:42.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.181 --rc genhtml_branch_coverage=1 00:13:42.181 --rc genhtml_function_coverage=1 00:13:42.181 --rc genhtml_legend=1 00:13:42.181 --rc geninfo_all_blocks=1 00:13:42.181 --rc geninfo_unexecuted_blocks=1 00:13:42.181 00:13:42.181 ' 00:13:42.181 11:57:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.182 --rc genhtml_branch_coverage=1 00:13:42.182 --rc genhtml_function_coverage=1 00:13:42.182 --rc genhtml_legend=1 00:13:42.182 --rc geninfo_all_blocks=1 00:13:42.182 --rc geninfo_unexecuted_blocks=1 00:13:42.182 00:13:42.182 ' 00:13:42.182 11:57:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.182 --rc genhtml_branch_coverage=1 00:13:42.182 --rc 
genhtml_function_coverage=1 00:13:42.182 --rc genhtml_legend=1 00:13:42.182 --rc geninfo_all_blocks=1 00:13:42.182 --rc geninfo_unexecuted_blocks=1 00:13:42.182 00:13:42.182 ' 00:13:42.182 11:57:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.182 --rc genhtml_branch_coverage=1 00:13:42.182 --rc genhtml_function_coverage=1 00:13:42.182 --rc genhtml_legend=1 00:13:42.182 --rc geninfo_all_blocks=1 00:13:42.182 --rc geninfo_unexecuted_blocks=1 00:13:42.182 00:13:42.182 ' 00:13:42.182 11:57:47 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:13:42.182 11:57:47 -- accel/accel.sh@74 -- # get_expected_opcs 00:13:42.182 11:57:47 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:42.182 11:57:47 -- accel/accel.sh@59 -- # spdk_tgt_pid=118171 00:13:42.182 11:57:47 -- accel/accel.sh@60 -- # waitforlisten 118171 00:13:42.182 11:57:47 -- common/autotest_common.sh@829 -- # '[' -z 118171 ']' 00:13:42.182 11:57:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.182 11:57:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.182 11:57:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.182 11:57:47 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:42.182 11:57:47 -- accel/accel.sh@58 -- # build_accel_config 00:13:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.182 11:57:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:42.182 11:57:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.182 11:57:47 -- common/autotest_common.sh@10 -- # set +x 00:13:42.182 11:57:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:42.182 11:57:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:42.182 11:57:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:42.182 11:57:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:42.182 11:57:47 -- accel/accel.sh@41 -- # local IFS=, 00:13:42.182 11:57:47 -- accel/accel.sh@42 -- # jq -r . 00:13:42.182 [2024-11-29 11:57:47.561704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:42.182 [2024-11-29 11:57:47.562224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118171 ] 00:13:42.440 [2024-11-29 11:57:47.708527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.440 [2024-11-29 11:57:47.790305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.440 [2024-11-29 11:57:47.790816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.372 11:57:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.372 11:57:48 -- common/autotest_common.sh@862 -- # return 0 00:13:43.372 11:57:48 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:43.372 11:57:48 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:13:43.372 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.372 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:13:43.372 11:57:48 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:13:43.372 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # IFS== 00:13:43.372 11:57:48 -- accel/accel.sh@64 -- # read -r opc module 00:13:43.372 11:57:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:43.372 11:57:48 -- accel/accel.sh@67 -- # killprocess 118171 00:13:43.372 11:57:48 -- common/autotest_common.sh@936 -- # '[' -z 118171 ']' 00:13:43.372 11:57:48 -- common/autotest_common.sh@940 -- # kill -0 118171 00:13:43.372 11:57:48 -- common/autotest_common.sh@941 -- # uname 00:13:43.372 11:57:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.372 11:57:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118171 00:13:43.372 11:57:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:43.372 11:57:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:43.372 11:57:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118171' 00:13:43.372 killing process with pid 118171 00:13:43.372 11:57:48 -- common/autotest_common.sh@955 -- # kill 118171 00:13:43.372 11:57:48 -- common/autotest_common.sh@960 -- # wait 118171 00:13:43.630 11:57:49 -- accel/accel.sh@68 -- # trap - ERR 00:13:43.630 11:57:49 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:13:43.630 11:57:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.630 11:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.630 11:57:49 -- common/autotest_common.sh@10 -- # set +x 00:13:43.630 11:57:49 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:13:43.630 11:57:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:43.630 11:57:49 -- accel/accel.sh@12 -- # build_accel_config 00:13:43.630 11:57:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:43.630 11:57:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:43.630 11:57:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:43.630 11:57:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:43.630 11:57:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:43.630 11:57:49 -- accel/accel.sh@41 -- # local IFS=, 00:13:43.630 11:57:49 -- accel/accel.sh@42 -- # jq -r . 
00:13:43.887 11:57:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.887 11:57:49 -- common/autotest_common.sh@10 -- # set +x 00:13:43.887 11:57:49 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:43.887 11:57:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:43.887 11:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.887 11:57:49 -- common/autotest_common.sh@10 -- # set +x 00:13:43.887 ************************************ 00:13:43.887 START TEST accel_missing_filename 00:13:43.887 ************************************ 00:13:43.887 11:57:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:13:43.887 11:57:49 -- common/autotest_common.sh@650 -- # local es=0 00:13:43.887 11:57:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:43.887 11:57:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:13:43.887 11:57:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.887 11:57:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:13:43.887 11:57:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.887 11:57:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:13:43.887 11:57:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:43.887 11:57:49 -- accel/accel.sh@12 -- # build_accel_config 00:13:43.887 11:57:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:43.887 11:57:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:43.887 11:57:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:43.887 11:57:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:43.887 11:57:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:43.887 11:57:49 -- accel/accel.sh@41 -- # local IFS=, 00:13:43.887 11:57:49 -- accel/accel.sh@42 -- # jq -r . 00:13:43.887 [2024-11-29 11:57:49.262169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:43.888 [2024-11-29 11:57:49.262699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118236 ] 00:13:44.145 [2024-11-29 11:57:49.426667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.145 [2024-11-29 11:57:49.509060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.145 [2024-11-29 11:57:49.566395] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:44.145 [2024-11-29 11:57:49.652824] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:13:44.402 A filename is required. 
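[editorial note, not part of the captured log: a hedged sketch for contrast. The negative test above runs the compress workload without -l, so accel_perf aborts with "A filename is required."; the compress_verify test that follows supplies an input file, roughly:]
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
[that run is still expected to abort, because compress does not support the -y verify option.]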
00:13:44.403 ************************************ 00:13:44.403 END TEST accel_missing_filename 00:13:44.403 ************************************ 00:13:44.403 11:57:49 -- common/autotest_common.sh@653 -- # es=234 00:13:44.403 11:57:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.403 11:57:49 -- common/autotest_common.sh@662 -- # es=106 00:13:44.403 11:57:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:44.403 11:57:49 -- common/autotest_common.sh@670 -- # es=1 00:13:44.403 11:57:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.403 00:13:44.403 real 0m0.533s 00:13:44.403 user 0m0.335s 00:13:44.403 sys 0m0.147s 00:13:44.403 11:57:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:44.403 11:57:49 -- common/autotest_common.sh@10 -- # set +x 00:13:44.403 11:57:49 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:44.403 11:57:49 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:13:44.403 11:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.403 11:57:49 -- common/autotest_common.sh@10 -- # set +x 00:13:44.403 ************************************ 00:13:44.403 START TEST accel_compress_verify 00:13:44.403 ************************************ 00:13:44.403 11:57:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:44.403 11:57:49 -- common/autotest_common.sh@650 -- # local es=0 00:13:44.403 11:57:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:44.403 11:57:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:13:44.403 11:57:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.403 11:57:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:13:44.403 11:57:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.403 11:57:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:44.403 11:57:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:44.403 11:57:49 -- accel/accel.sh@12 -- # build_accel_config 00:13:44.403 11:57:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:44.403 11:57:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:44.403 11:57:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:44.403 11:57:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:44.403 11:57:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:44.403 11:57:49 -- accel/accel.sh@41 -- # local IFS=, 00:13:44.403 11:57:49 -- accel/accel.sh@42 -- # jq -r . 00:13:44.403 [2024-11-29 11:57:49.843998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:44.403 [2024-11-29 11:57:49.844477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118266 ] 00:13:44.661 [2024-11-29 11:57:50.007648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.661 [2024-11-29 11:57:50.098493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.661 [2024-11-29 11:57:50.156955] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:44.920 [2024-11-29 11:57:50.242144] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:13:44.920 00:13:44.920 Compression does not support the verify option, aborting. 00:13:44.920 ************************************ 00:13:44.920 END TEST accel_compress_verify 00:13:44.920 ************************************ 00:13:44.920 11:57:50 -- common/autotest_common.sh@653 -- # es=161 00:13:44.920 11:57:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.920 11:57:50 -- common/autotest_common.sh@662 -- # es=33 00:13:44.920 11:57:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:44.920 11:57:50 -- common/autotest_common.sh@670 -- # es=1 00:13:44.920 11:57:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.920 00:13:44.920 real 0m0.535s 00:13:44.920 user 0m0.317s 00:13:44.920 sys 0m0.159s 00:13:44.920 11:57:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:44.920 11:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:44.920 11:57:50 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:44.920 11:57:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:44.920 11:57:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.920 11:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:44.920 ************************************ 00:13:44.920 START TEST accel_wrong_workload 00:13:44.920 ************************************ 00:13:44.920 11:57:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:13:44.920 11:57:50 -- common/autotest_common.sh@650 -- # local es=0 00:13:44.920 11:57:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:44.920 11:57:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:13:44.920 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.920 11:57:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:13:44.920 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.920 11:57:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:13:44.920 11:57:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:44.920 11:57:50 -- accel/accel.sh@12 -- # build_accel_config 00:13:44.920 11:57:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:44.920 11:57:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:44.920 11:57:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:44.920 11:57:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:44.920 11:57:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:44.920 11:57:50 -- accel/accel.sh@41 -- # local IFS=, 00:13:44.920 11:57:50 -- accel/accel.sh@42 -- # jq -r . 
00:13:44.920 Unsupported workload type: foobar 00:13:44.920 [2024-11-29 11:57:50.428781] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:45.179 accel_perf options: 00:13:45.179 [-h help message] 00:13:45.179 [-q queue depth per core] 00:13:45.179 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:45.179 [-T number of threads per core 00:13:45.179 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:45.179 [-t time in seconds] 00:13:45.179 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:45.179 [ dif_verify, , dif_generate, dif_generate_copy 00:13:45.179 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:45.179 [-l for compress/decompress workloads, name of uncompressed input file 00:13:45.179 [-S for crc32c workload, use this seed value (default 0) 00:13:45.179 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:45.180 [-f for fill workload, use this BYTE value (default 255) 00:13:45.180 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:45.180 [-y verify result if this switch is on] 00:13:45.180 [-a tasks to allocate per core (default: same value as -q)] 00:13:45.180 Can be used to spread operations across a wider range of memory. 00:13:45.180 11:57:50 -- common/autotest_common.sh@653 -- # es=1 00:13:45.180 11:57:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.180 11:57:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.180 11:57:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.180 00:13:45.180 real 0m0.053s 00:13:45.180 user 0m0.018s 00:13:45.180 sys 0m0.032s 00:13:45.180 11:57:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.180 11:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:45.180 ************************************ 00:13:45.180 END TEST accel_wrong_workload 00:13:45.180 ************************************ 00:13:45.180 11:57:50 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:45.180 11:57:50 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:13:45.180 11:57:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.180 11:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:45.180 ************************************ 00:13:45.180 START TEST accel_negative_buffers 00:13:45.180 ************************************ 00:13:45.180 11:57:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:45.180 11:57:50 -- common/autotest_common.sh@650 -- # local es=0 00:13:45.180 11:57:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:45.180 11:57:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:13:45.180 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.180 11:57:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:13:45.180 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.180 11:57:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:13:45.180 11:57:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:45.180 11:57:50 -- accel/accel.sh@12 -- # 
build_accel_config 00:13:45.180 11:57:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:45.180 11:57:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:45.180 11:57:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:45.180 11:57:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:45.180 11:57:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:45.180 11:57:50 -- accel/accel.sh@41 -- # local IFS=, 00:13:45.180 11:57:50 -- accel/accel.sh@42 -- # jq -r . 00:13:45.180 -x option must be non-negative. 00:13:45.180 [2024-11-29 11:57:50.529585] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:45.180 accel_perf options: 00:13:45.180 [-h help message] 00:13:45.180 [-q queue depth per core] 00:13:45.180 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:45.180 [-T number of threads per core 00:13:45.180 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:45.180 [-t time in seconds] 00:13:45.180 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:45.180 [ dif_verify, , dif_generate, dif_generate_copy 00:13:45.180 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:45.180 [-l for compress/decompress workloads, name of uncompressed input file 00:13:45.180 [-S for crc32c workload, use this seed value (default 0) 00:13:45.180 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:45.180 [-f for fill workload, use this BYTE value (default 255) 00:13:45.180 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:45.180 [-y verify result if this switch is on] 00:13:45.180 [-a tasks to allocate per core (default: same value as -q)] 00:13:45.180 Can be used to spread operations across a wider range of memory. 
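[editorial note, not part of the captured log: the negative-buffers test above passes -x -1, which accel_perf rejects. Per the usage text just printed, -x takes the number of xor source buffers with a minimum of 2, so a hedged sketch of a valid invocation would be:]
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2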
00:13:45.180 11:57:50 -- common/autotest_common.sh@653 -- # es=1 00:13:45.180 11:57:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.180 11:57:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.180 11:57:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.180 00:13:45.180 real 0m0.053s 00:13:45.180 user 0m0.067s 00:13:45.180 sys 0m0.032s 00:13:45.180 11:57:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.180 11:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:45.180 ************************************ 00:13:45.180 END TEST accel_negative_buffers 00:13:45.180 ************************************ 00:13:45.180 11:57:50 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:45.180 11:57:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:13:45.180 11:57:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.180 11:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:45.180 ************************************ 00:13:45.180 START TEST accel_crc32c 00:13:45.180 ************************************ 00:13:45.180 11:57:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:45.180 11:57:50 -- accel/accel.sh@16 -- # local accel_opc 00:13:45.180 11:57:50 -- accel/accel.sh@17 -- # local accel_module 00:13:45.180 11:57:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:45.180 11:57:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:45.180 11:57:50 -- accel/accel.sh@12 -- # build_accel_config 00:13:45.180 11:57:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:45.180 11:57:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:45.180 11:57:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:45.180 11:57:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:45.180 11:57:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:45.180 11:57:50 -- accel/accel.sh@41 -- # local IFS=, 00:13:45.180 11:57:50 -- accel/accel.sh@42 -- # jq -r . 00:13:45.180 [2024-11-29 11:57:50.626371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:45.180 [2024-11-29 11:57:50.626573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118347 ] 00:13:45.438 [2024-11-29 11:57:50.769698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.438 [2024-11-29 11:57:50.848841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.811 11:57:52 -- accel/accel.sh@18 -- # out=' 00:13:46.811 SPDK Configuration: 00:13:46.811 Core mask: 0x1 00:13:46.811 00:13:46.811 Accel Perf Configuration: 00:13:46.811 Workload Type: crc32c 00:13:46.811 CRC-32C seed: 32 00:13:46.811 Transfer size: 4096 bytes 00:13:46.811 Vector count 1 00:13:46.811 Module: software 00:13:46.811 Queue depth: 32 00:13:46.811 Allocate depth: 32 00:13:46.811 # threads/core: 1 00:13:46.811 Run time: 1 seconds 00:13:46.811 Verify: Yes 00:13:46.811 00:13:46.811 Running for 1 seconds... 
00:13:46.811 00:13:46.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:46.811 ------------------------------------------------------------------------------------ 00:13:46.811 0,0 418688/s 1635 MiB/s 0 0 00:13:46.811 ==================================================================================== 00:13:46.812 Total 418688/s 1635 MiB/s 0 0' 00:13:46.812 11:57:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:46.812 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:46.812 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:46.812 11:57:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:46.812 11:57:52 -- accel/accel.sh@12 -- # build_accel_config 00:13:46.812 11:57:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:46.812 11:57:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:46.812 11:57:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:46.812 11:57:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:46.812 11:57:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:46.812 11:57:52 -- accel/accel.sh@41 -- # local IFS=, 00:13:46.812 11:57:52 -- accel/accel.sh@42 -- # jq -r . 00:13:46.812 [2024-11-29 11:57:52.121663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:46.812 [2024-11-29 11:57:52.121914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118370 ] 00:13:46.812 [2024-11-29 11:57:52.270116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.069 [2024-11-29 11:57:52.371049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=0x1 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=crc32c 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=32 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=software 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@23 -- # accel_module=software 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=32 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=32 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.069 11:57:52 -- accel/accel.sh@21 -- # val=1 00:13:47.069 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.069 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.070 11:57:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:47.070 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.070 11:57:52 -- accel/accel.sh@21 -- # val=Yes 00:13:47.070 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.070 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.070 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:47.070 11:57:52 -- accel/accel.sh@21 -- # val= 00:13:47.070 11:57:52 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # IFS=: 00:13:47.070 11:57:52 -- accel/accel.sh@20 -- # read -r var val 00:13:48.541 11:57:53 -- accel/accel.sh@21 -- # val= 00:13:48.541 11:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # IFS=: 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # read -r var val 00:13:48.541 11:57:53 -- accel/accel.sh@21 -- # val= 00:13:48.541 11:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # IFS=: 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # read -r var val 00:13:48.541 11:57:53 -- accel/accel.sh@21 -- # val= 00:13:48.541 11:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # IFS=: 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # read -r var val 00:13:48.541 11:57:53 -- accel/accel.sh@21 -- # val= 00:13:48.541 11:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # IFS=: 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # read -r var val 00:13:48.541 11:57:53 -- accel/accel.sh@21 -- # val= 00:13:48.541 11:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # IFS=: 00:13:48.541 11:57:53 
-- accel/accel.sh@20 -- # read -r var val 00:13:48.541 11:57:53 -- accel/accel.sh@21 -- # val= 00:13:48.541 11:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # IFS=: 00:13:48.541 11:57:53 -- accel/accel.sh@20 -- # read -r var val 00:13:48.541 ************************************ 00:13:48.541 END TEST accel_crc32c 00:13:48.541 ************************************ 00:13:48.541 11:57:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:48.541 11:57:53 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:13:48.541 11:57:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:48.541 00:13:48.541 real 0m3.030s 00:13:48.541 user 0m2.540s 00:13:48.541 sys 0m0.316s 00:13:48.541 11:57:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:48.541 11:57:53 -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 11:57:53 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:48.541 11:57:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:13:48.541 11:57:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.541 11:57:53 -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 ************************************ 00:13:48.541 START TEST accel_crc32c_C2 00:13:48.541 ************************************ 00:13:48.541 11:57:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:48.541 11:57:53 -- accel/accel.sh@16 -- # local accel_opc 00:13:48.541 11:57:53 -- accel/accel.sh@17 -- # local accel_module 00:13:48.541 11:57:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:48.541 11:57:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:48.541 11:57:53 -- accel/accel.sh@12 -- # build_accel_config 00:13:48.541 11:57:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:48.541 11:57:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:48.541 11:57:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:48.541 11:57:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:48.541 11:57:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:48.541 11:57:53 -- accel/accel.sh@41 -- # local IFS=, 00:13:48.541 11:57:53 -- accel/accel.sh@42 -- # jq -r . 00:13:48.541 [2024-11-29 11:57:53.712922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:48.541 [2024-11-29 11:57:53.713320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118417 ] 00:13:48.541 [2024-11-29 11:57:53.862561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.541 [2024-11-29 11:57:53.943940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.914 11:57:55 -- accel/accel.sh@18 -- # out=' 00:13:49.914 SPDK Configuration: 00:13:49.914 Core mask: 0x1 00:13:49.914 00:13:49.914 Accel Perf Configuration: 00:13:49.914 Workload Type: crc32c 00:13:49.914 CRC-32C seed: 0 00:13:49.914 Transfer size: 4096 bytes 00:13:49.914 Vector count 2 00:13:49.914 Module: software 00:13:49.914 Queue depth: 32 00:13:49.914 Allocate depth: 32 00:13:49.914 # threads/core: 1 00:13:49.914 Run time: 1 seconds 00:13:49.914 Verify: Yes 00:13:49.914 00:13:49.914 Running for 1 seconds... 
00:13:49.914 00:13:49.914 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:49.914 ------------------------------------------------------------------------------------ 00:13:49.914 0,0 327744/s 2560 MiB/s 0 0 00:13:49.914 ==================================================================================== 00:13:49.914 Total 327744/s 1280 MiB/s 0 0' 00:13:49.914 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:49.914 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:49.914 11:57:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:49.914 11:57:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:49.914 11:57:55 -- accel/accel.sh@12 -- # build_accel_config 00:13:49.914 11:57:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:49.914 11:57:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:49.914 11:57:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:49.914 11:57:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:49.914 11:57:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:49.914 11:57:55 -- accel/accel.sh@41 -- # local IFS=, 00:13:49.914 11:57:55 -- accel/accel.sh@42 -- # jq -r . 00:13:49.914 [2024-11-29 11:57:55.207776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:49.914 [2024-11-29 11:57:55.208581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118440 ] 00:13:49.914 [2024-11-29 11:57:55.358004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.172 [2024-11-29 11:57:55.453481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=0x1 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=crc32c 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=0 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=software 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@23 -- # accel_module=software 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=32 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=32 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=1 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val=Yes 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:50.172 11:57:55 -- accel/accel.sh@21 -- # val= 00:13:50.172 11:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # IFS=: 00:13:50.172 11:57:55 -- accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@21 -- # val= 00:13:51.545 11:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # IFS=: 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@21 -- # val= 00:13:51.545 11:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # IFS=: 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@21 -- # val= 00:13:51.545 11:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # IFS=: 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@21 -- # val= 00:13:51.545 11:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # IFS=: 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@21 -- # val= 00:13:51.545 11:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # IFS=: 00:13:51.545 11:57:56 -- 
accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@21 -- # val= 00:13:51.545 11:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # IFS=: 00:13:51.545 11:57:56 -- accel/accel.sh@20 -- # read -r var val 00:13:51.545 11:57:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:51.545 11:57:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:13:51.545 11:57:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:51.545 00:13:51.545 real 0m3.042s 00:13:51.545 user 0m2.561s 00:13:51.545 sys 0m0.307s 00:13:51.545 11:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:51.545 11:57:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.545 ************************************ 00:13:51.545 END TEST accel_crc32c_C2 00:13:51.545 ************************************ 00:13:51.545 11:57:56 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:51.545 11:57:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:51.545 11:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.545 11:57:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.545 ************************************ 00:13:51.545 START TEST accel_copy 00:13:51.545 ************************************ 00:13:51.545 11:57:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:13:51.545 11:57:56 -- accel/accel.sh@16 -- # local accel_opc 00:13:51.545 11:57:56 -- accel/accel.sh@17 -- # local accel_module 00:13:51.545 11:57:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:13:51.545 11:57:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:51.545 11:57:56 -- accel/accel.sh@12 -- # build_accel_config 00:13:51.545 11:57:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:51.545 11:57:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:51.545 11:57:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:51.545 11:57:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:51.545 11:57:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:51.545 11:57:56 -- accel/accel.sh@41 -- # local IFS=, 00:13:51.545 11:57:56 -- accel/accel.sh@42 -- # jq -r . 00:13:51.545 [2024-11-29 11:57:56.805069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:51.546 [2024-11-29 11:57:56.805495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118485 ] 00:13:51.546 [2024-11-29 11:57:56.956860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.546 [2024-11-29 11:57:57.036307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.920 11:57:58 -- accel/accel.sh@18 -- # out=' 00:13:52.920 SPDK Configuration: 00:13:52.920 Core mask: 0x1 00:13:52.920 00:13:52.920 Accel Perf Configuration: 00:13:52.920 Workload Type: copy 00:13:52.920 Transfer size: 4096 bytes 00:13:52.920 Vector count 1 00:13:52.920 Module: software 00:13:52.920 Queue depth: 32 00:13:52.920 Allocate depth: 32 00:13:52.920 # threads/core: 1 00:13:52.920 Run time: 1 seconds 00:13:52.920 Verify: Yes 00:13:52.920 00:13:52.920 Running for 1 seconds... 
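[editor's note] For reading the throughput tables in these runs: the per-core MiB/s column is consistent with transfers per second multiplied by the bytes moved per transfer (transfer size times vector count). That relation is inferred from the numbers in this log, not taken from accel_perf's source. A small check against the 327744/s reported for the crc32c -C 2 pass above:

#include <stdio.h>

int main(void)
{
    /* Figures from the crc32c -C 2 results table above. */
    double transfers_per_sec = 327744.0;
    double transfer_size     = 4096.0;   /* bytes per vector element */
    double vector_count      = 2.0;

    double mib_per_sec = transfers_per_sec * transfer_size * vector_count / (1024.0 * 1024.0);
    printf("%.1f MiB/s\n", mib_per_sec);  /* ~2560 MiB/s, matching the per-core row */
    return 0;
}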
00:13:52.920 00:13:52.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:52.920 ------------------------------------------------------------------------------------ 00:13:52.920 0,0 250464/s 978 MiB/s 0 0 00:13:52.920 ==================================================================================== 00:13:52.920 Total 250464/s 978 MiB/s 0 0' 00:13:52.920 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:52.920 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:52.920 11:57:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:52.920 11:57:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:52.920 11:57:58 -- accel/accel.sh@12 -- # build_accel_config 00:13:52.920 11:57:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:52.920 11:57:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:52.920 11:57:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:52.920 11:57:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:52.920 11:57:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:52.920 11:57:58 -- accel/accel.sh@41 -- # local IFS=, 00:13:52.920 11:57:58 -- accel/accel.sh@42 -- # jq -r . 00:13:52.920 [2024-11-29 11:57:58.315057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:52.920 [2024-11-29 11:57:58.315962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118508 ] 00:13:53.178 [2024-11-29 11:57:58.465386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.178 [2024-11-29 11:57:58.564270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val= 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val= 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val=0x1 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val= 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val= 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val=copy 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@24 -- # accel_opc=copy 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- 
accel/accel.sh@21 -- # val= 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val=software 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@23 -- # accel_module=software 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val=32 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val=32 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val=1 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.178 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.178 11:57:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:53.178 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.179 11:57:58 -- accel/accel.sh@21 -- # val=Yes 00:13:53.179 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.179 11:57:58 -- accel/accel.sh@21 -- # val= 00:13:53.179 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:53.179 11:57:58 -- accel/accel.sh@21 -- # val= 00:13:53.179 11:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # IFS=: 00:13:53.179 11:57:58 -- accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@21 -- # val= 00:13:54.555 11:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # IFS=: 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@21 -- # val= 00:13:54.555 11:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # IFS=: 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@21 -- # val= 00:13:54.555 11:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # IFS=: 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@21 -- # val= 00:13:54.555 11:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # IFS=: 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@21 -- # val= 00:13:54.555 11:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # IFS=: 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@21 -- # val= 00:13:54.555 11:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:54.555 11:57:59 -- accel/accel.sh@20 -- # IFS=: 00:13:54.555 11:57:59 -- 
accel/accel.sh@20 -- # read -r var val 00:13:54.555 11:57:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:54.555 11:57:59 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:13:54.555 11:57:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:54.555 00:13:54.555 real 0m3.049s 00:13:54.555 user 0m2.594s 00:13:54.555 sys 0m0.296s 00:13:54.555 11:57:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:54.555 11:57:59 -- common/autotest_common.sh@10 -- # set +x 00:13:54.555 ************************************ 00:13:54.555 END TEST accel_copy 00:13:54.555 ************************************ 00:13:54.555 11:57:59 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:54.555 11:57:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:54.555 11:57:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.555 11:57:59 -- common/autotest_common.sh@10 -- # set +x 00:13:54.555 ************************************ 00:13:54.555 START TEST accel_fill 00:13:54.555 ************************************ 00:13:54.555 11:57:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:54.555 11:57:59 -- accel/accel.sh@16 -- # local accel_opc 00:13:54.555 11:57:59 -- accel/accel.sh@17 -- # local accel_module 00:13:54.555 11:57:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:54.555 11:57:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:54.555 11:57:59 -- accel/accel.sh@12 -- # build_accel_config 00:13:54.555 11:57:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:54.555 11:57:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:54.555 11:57:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:54.555 11:57:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:54.555 11:57:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:54.555 11:57:59 -- accel/accel.sh@41 -- # local IFS=, 00:13:54.555 11:57:59 -- accel/accel.sh@42 -- # jq -r . 00:13:54.555 [2024-11-29 11:57:59.903380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:54.555 [2024-11-29 11:57:59.904321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118554 ] 00:13:54.555 [2024-11-29 11:58:00.058901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.813 [2024-11-29 11:58:00.135614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.187 11:58:01 -- accel/accel.sh@18 -- # out=' 00:13:56.187 SPDK Configuration: 00:13:56.187 Core mask: 0x1 00:13:56.187 00:13:56.187 Accel Perf Configuration: 00:13:56.187 Workload Type: fill 00:13:56.187 Fill pattern: 0x80 00:13:56.187 Transfer size: 4096 bytes 00:13:56.187 Vector count 1 00:13:56.187 Module: software 00:13:56.187 Queue depth: 64 00:13:56.187 Allocate depth: 64 00:13:56.187 # threads/core: 1 00:13:56.187 Run time: 1 seconds 00:13:56.187 Verify: Yes 00:13:56.187 00:13:56.187 Running for 1 seconds... 
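[editor's note] The fill workload configured above writes a single repeated byte (fill pattern 0x80) across each 4096-byte buffer and then verifies it. A minimal sketch of that operation follows; fill_buffer and the buffer are illustrative names only, and the real software module may use wider stores than a plain memset.

#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Fill 'len' bytes of 'dst' with the pattern byte. */
static void fill_buffer(void *dst, uint8_t pattern, size_t len)
{
    memset(dst, pattern, len);
}

int main(void)
{
    uint8_t buf[4096];
    fill_buffer(buf, 0x80, sizeof(buf));   /* "Fill pattern: 0x80", 4096-byte transfer */

    /* Verification step, analogous to "Verify: Yes" in the run above. */
    for (size_t i = 0; i < sizeof(buf); i++)
        assert(buf[i] == 0x80);
    return 0;
}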
00:13:56.187 00:13:56.187 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:56.187 ------------------------------------------------------------------------------------ 00:13:56.187 0,0 378944/s 1480 MiB/s 0 0 00:13:56.187 ==================================================================================== 00:13:56.187 Total 378944/s 1480 MiB/s 0 0' 00:13:56.187 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.187 11:58:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:56.187 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.187 11:58:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:56.187 11:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:13:56.187 11:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:56.187 11:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:56.187 11:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:56.187 11:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:56.187 11:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:56.187 11:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:13:56.187 11:58:01 -- accel/accel.sh@42 -- # jq -r . 00:13:56.187 [2024-11-29 11:58:01.415469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:56.187 [2024-11-29 11:58:01.415989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118576 ] 00:13:56.187 [2024-11-29 11:58:01.574098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.187 [2024-11-29 11:58:01.659027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.444 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=0x1 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=fill 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@24 -- # accel_opc=fill 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=0x80 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 
00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=software 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@23 -- # accel_module=software 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=64 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=64 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=1 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val=Yes 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:56.445 11:58:01 -- accel/accel.sh@21 -- # val= 00:13:56.445 11:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # IFS=: 00:13:56.445 11:58:01 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@21 -- # val= 00:13:57.818 11:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # IFS=: 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@21 -- # val= 00:13:57.818 11:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # IFS=: 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@21 -- # val= 00:13:57.818 11:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # IFS=: 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@21 -- # val= 00:13:57.818 11:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # IFS=: 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@21 -- # val= 00:13:57.818 11:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # IFS=: 
00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@21 -- # val= 00:13:57.818 11:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # IFS=: 00:13:57.818 11:58:02 -- accel/accel.sh@20 -- # read -r var val 00:13:57.818 11:58:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:57.818 11:58:02 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:13:57.818 11:58:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:57.818 00:13:57.818 real 0m3.054s 00:13:57.818 user 0m2.581s 00:13:57.818 sys 0m0.310s 00:13:57.818 11:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:57.818 11:58:02 -- common/autotest_common.sh@10 -- # set +x 00:13:57.818 ************************************ 00:13:57.818 END TEST accel_fill 00:13:57.818 ************************************ 00:13:57.818 11:58:02 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:13:57.818 11:58:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:57.818 11:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.818 11:58:02 -- common/autotest_common.sh@10 -- # set +x 00:13:57.818 ************************************ 00:13:57.818 START TEST accel_copy_crc32c 00:13:57.818 ************************************ 00:13:57.818 11:58:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:13:57.818 11:58:02 -- accel/accel.sh@16 -- # local accel_opc 00:13:57.818 11:58:02 -- accel/accel.sh@17 -- # local accel_module 00:13:57.818 11:58:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:57.818 11:58:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:57.818 11:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:13:57.818 11:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:57.818 11:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:57.818 11:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:57.818 11:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:57.818 11:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:57.818 11:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:13:57.818 11:58:02 -- accel/accel.sh@42 -- # jq -r . 00:13:57.818 [2024-11-29 11:58:03.004927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:57.818 [2024-11-29 11:58:03.005328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118618 ] 00:13:57.818 [2024-11-29 11:58:03.154059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.818 [2024-11-29 11:58:03.231459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.191 11:58:04 -- accel/accel.sh@18 -- # out=' 00:13:59.191 SPDK Configuration: 00:13:59.191 Core mask: 0x1 00:13:59.191 00:13:59.191 Accel Perf Configuration: 00:13:59.191 Workload Type: copy_crc32c 00:13:59.191 CRC-32C seed: 0 00:13:59.191 Vector size: 4096 bytes 00:13:59.191 Transfer size: 4096 bytes 00:13:59.191 Vector count 1 00:13:59.191 Module: software 00:13:59.191 Queue depth: 32 00:13:59.191 Allocate depth: 32 00:13:59.191 # threads/core: 1 00:13:59.191 Run time: 1 seconds 00:13:59.191 Verify: Yes 00:13:59.191 00:13:59.191 Running for 1 seconds... 
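[editor's note] The copy_crc32c workload configured above combines the two preceding operations: copy a 4096-byte buffer and compute a CRC-32C (seed 0) of the copied data in one request. A sketch of that behaviour is below; the copy_crc32c helper name is illustrative, and the bitwise CRC helper from the earlier crc32c sketch is repeated so the example stands alone.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc;
}

/* Copy src to dst, then return the CRC-32C of the copied bytes. */
static uint32_t copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed)
{
    memcpy(dst, src, len);
    return ~crc32c_update(~seed, dst, len);
}

int main(void)
{
    uint8_t src[4096], dst[4096];
    memset(src, 0x5a, sizeof(src));                          /* illustrative contents */
    uint32_t crc = copy_crc32c(dst, src, sizeof(dst), 0);    /* seed 0, 4096-byte transfer */
    printf("crc32c of copied data: 0x%08x\n", crc);
    return 0;
}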
00:13:59.191 00:13:59.191 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:59.191 ------------------------------------------------------------------------------------ 00:13:59.191 0,0 208832/s 815 MiB/s 0 0 00:13:59.191 ==================================================================================== 00:13:59.191 Total 208832/s 815 MiB/s 0 0' 00:13:59.191 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.191 11:58:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:59.191 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.191 11:58:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:59.191 11:58:04 -- accel/accel.sh@12 -- # build_accel_config 00:13:59.191 11:58:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:59.191 11:58:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:59.191 11:58:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:59.191 11:58:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:59.191 11:58:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:59.191 11:58:04 -- accel/accel.sh@41 -- # local IFS=, 00:13:59.191 11:58:04 -- accel/accel.sh@42 -- # jq -r . 00:13:59.191 [2024-11-29 11:58:04.510541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:59.191 [2024-11-29 11:58:04.510967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118646 ] 00:13:59.191 [2024-11-29 11:58:04.658084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.449 [2024-11-29 11:58:04.748021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=0x1 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=copy_crc32c 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=0 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 
11:58:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=software 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@23 -- # accel_module=software 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=32 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=32 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=1 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val=Yes 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:13:59.449 11:58:04 -- accel/accel.sh@21 -- # val= 00:13:59.449 11:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # IFS=: 00:13:59.449 11:58:04 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@21 -- # val= 00:14:00.823 11:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # IFS=: 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@21 -- # val= 00:14:00.823 11:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # IFS=: 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@21 -- # val= 00:14:00.823 11:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # IFS=: 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@21 -- # val= 00:14:00.823 11:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # IFS=: 
00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@21 -- # val= 00:14:00.823 11:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # IFS=: 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@21 -- # val= 00:14:00.823 11:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # IFS=: 00:14:00.823 11:58:06 -- accel/accel.sh@20 -- # read -r var val 00:14:00.823 11:58:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:00.823 11:58:06 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:14:00.823 11:58:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:00.823 00:14:00.823 real 0m3.046s 00:14:00.823 user 0m2.590s 00:14:00.823 sys 0m0.287s 00:14:00.823 11:58:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:00.823 11:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:00.823 ************************************ 00:14:00.823 END TEST accel_copy_crc32c 00:14:00.823 ************************************ 00:14:00.823 11:58:06 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:14:00.823 11:58:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:00.823 11:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.823 11:58:06 -- common/autotest_common.sh@10 -- # set +x 00:14:00.823 ************************************ 00:14:00.823 START TEST accel_copy_crc32c_C2 00:14:00.823 ************************************ 00:14:00.823 11:58:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:14:00.823 11:58:06 -- accel/accel.sh@16 -- # local accel_opc 00:14:00.823 11:58:06 -- accel/accel.sh@17 -- # local accel_module 00:14:00.823 11:58:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:14:00.823 11:58:06 -- accel/accel.sh@12 -- # build_accel_config 00:14:00.823 11:58:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:14:00.823 11:58:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:00.823 11:58:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:00.823 11:58:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:00.823 11:58:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:00.823 11:58:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:00.823 11:58:06 -- accel/accel.sh@41 -- # local IFS=, 00:14:00.823 11:58:06 -- accel/accel.sh@42 -- # jq -r . 00:14:00.823 [2024-11-29 11:58:06.111814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:00.823 [2024-11-29 11:58:06.112210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118686 ] 00:14:00.823 [2024-11-29 11:58:06.262307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.081 [2024-11-29 11:58:06.351804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.454 11:58:07 -- accel/accel.sh@18 -- # out=' 00:14:02.454 SPDK Configuration: 00:14:02.454 Core mask: 0x1 00:14:02.454 00:14:02.454 Accel Perf Configuration: 00:14:02.454 Workload Type: copy_crc32c 00:14:02.454 CRC-32C seed: 0 00:14:02.454 Vector size: 4096 bytes 00:14:02.454 Transfer size: 8192 bytes 00:14:02.454 Vector count 2 00:14:02.454 Module: software 00:14:02.454 Queue depth: 32 00:14:02.454 Allocate depth: 32 00:14:02.454 # threads/core: 1 00:14:02.454 Run time: 1 seconds 00:14:02.454 Verify: Yes 00:14:02.454 00:14:02.454 Running for 1 seconds... 00:14:02.454 00:14:02.454 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:02.454 ------------------------------------------------------------------------------------ 00:14:02.454 0,0 140896/s 1100 MiB/s 0 0 00:14:02.454 ==================================================================================== 00:14:02.454 Total 140896/s 550 MiB/s 0 0' 00:14:02.454 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.454 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.454 11:58:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:14:02.454 11:58:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:14:02.454 11:58:07 -- accel/accel.sh@12 -- # build_accel_config 00:14:02.454 11:58:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:02.454 11:58:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:02.454 11:58:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:02.454 11:58:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:02.454 11:58:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:02.454 11:58:07 -- accel/accel.sh@41 -- # local IFS=, 00:14:02.454 11:58:07 -- accel/accel.sh@42 -- # jq -r . 00:14:02.454 [2024-11-29 11:58:07.648504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:02.454 [2024-11-29 11:58:07.648945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118721 ] 00:14:02.454 [2024-11-29 11:58:07.798538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.455 [2024-11-29 11:58:07.891004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=0x1 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=0 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val='8192 bytes' 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=software 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@23 -- # accel_module=software 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=32 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=32 
00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=1 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val=Yes 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:02.713 11:58:07 -- accel/accel.sh@21 -- # val= 00:14:02.713 11:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # IFS=: 00:14:02.713 11:58:07 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 11:58:09 -- accel/accel.sh@21 -- # val= 00:14:03.645 11:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # IFS=: 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 11:58:09 -- accel/accel.sh@21 -- # val= 00:14:03.645 11:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # IFS=: 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 11:58:09 -- accel/accel.sh@21 -- # val= 00:14:03.645 11:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # IFS=: 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 11:58:09 -- accel/accel.sh@21 -- # val= 00:14:03.645 11:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # IFS=: 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 11:58:09 -- accel/accel.sh@21 -- # val= 00:14:03.645 11:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # IFS=: 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 11:58:09 -- accel/accel.sh@21 -- # val= 00:14:03.645 11:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # IFS=: 00:14:03.645 11:58:09 -- accel/accel.sh@20 -- # read -r var val 00:14:03.645 ************************************ 00:14:03.645 END TEST accel_copy_crc32c_C2 00:14:03.645 ************************************ 00:14:03.645 11:58:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:03.645 11:58:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:14:03.645 11:58:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:03.645 00:14:03.645 real 0m3.070s 00:14:03.645 user 0m2.595s 00:14:03.645 sys 0m0.304s 00:14:03.645 11:58:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:03.645 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:14:03.902 11:58:09 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:14:03.902 11:58:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:14:03.902 11:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.902 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:14:03.902 ************************************ 00:14:03.902 START TEST accel_dualcast 00:14:03.902 ************************************ 00:14:03.902 11:58:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:14:03.902 11:58:09 -- accel/accel.sh@16 -- # local accel_opc 00:14:03.902 11:58:09 -- accel/accel.sh@17 -- # local accel_module 00:14:03.902 11:58:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:14:03.902 11:58:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:14:03.902 11:58:09 -- accel/accel.sh@12 -- # build_accel_config 00:14:03.902 11:58:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:03.902 11:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:03.902 11:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:03.902 11:58:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:03.902 11:58:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:03.902 11:58:09 -- accel/accel.sh@41 -- # local IFS=, 00:14:03.902 11:58:09 -- accel/accel.sh@42 -- # jq -r . 00:14:03.902 [2024-11-29 11:58:09.225246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:03.902 [2024-11-29 11:58:09.226029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118754 ] 00:14:03.902 [2024-11-29 11:58:09.368474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.158 [2024-11-29 11:58:09.457428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.529 11:58:10 -- accel/accel.sh@18 -- # out=' 00:14:05.529 SPDK Configuration: 00:14:05.529 Core mask: 0x1 00:14:05.529 00:14:05.529 Accel Perf Configuration: 00:14:05.529 Workload Type: dualcast 00:14:05.529 Transfer size: 4096 bytes 00:14:05.529 Vector count 1 00:14:05.529 Module: software 00:14:05.529 Queue depth: 32 00:14:05.529 Allocate depth: 32 00:14:05.529 # threads/core: 1 00:14:05.529 Run time: 1 seconds 00:14:05.529 Verify: Yes 00:14:05.529 00:14:05.529 Running for 1 seconds... 00:14:05.529 00:14:05.529 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:05.529 ------------------------------------------------------------------------------------ 00:14:05.529 0,0 264864/s 1034 MiB/s 0 0 00:14:05.529 ==================================================================================== 00:14:05.529 Total 264864/s 1034 MiB/s 0 0' 00:14:05.529 11:58:10 -- accel/accel.sh@20 -- # IFS=: 00:14:05.529 11:58:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:14:05.529 11:58:10 -- accel/accel.sh@20 -- # read -r var val 00:14:05.529 11:58:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:14:05.529 11:58:10 -- accel/accel.sh@12 -- # build_accel_config 00:14:05.529 11:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:05.529 11:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:05.529 11:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:05.529 11:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:05.529 11:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:05.529 11:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:14:05.529 11:58:10 -- accel/accel.sh@42 -- # jq -r . 
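[editor's note] The dualcast workload being configured here writes one 4096-byte source to two destination buffers in a single operation and verifies both. A minimal software sketch of that behaviour, with illustrative buffer and function names (not the accel module's code):

#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Dualcast: copy the same source into two destinations. */
static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}

int main(void)
{
    uint8_t src[4096], dst1[4096], dst2[4096];
    memset(src, 0x3c, sizeof(src));            /* illustrative contents */

    dualcast(dst1, dst2, src, sizeof(src));    /* 4096-byte transfer, vector count 1 */

    /* Verify both copies, analogous to "Verify: Yes". */
    assert(memcmp(dst1, src, sizeof(src)) == 0);
    assert(memcmp(dst2, src, sizeof(src)) == 0);
    return 0;
}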
00:14:05.529 [2024-11-29 11:58:10.730027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:05.529 [2024-11-29 11:58:10.732185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118788 ] 00:14:05.529 [2024-11-29 11:58:10.896526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.529 [2024-11-29 11:58:11.008205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=0x1 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=dualcast 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=software 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@23 -- # accel_module=software 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=32 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=32 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=1 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 
11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.787 11:58:11 -- accel/accel.sh@21 -- # val=Yes 00:14:05.787 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.787 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.788 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.788 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.788 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.788 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.788 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:05.788 11:58:11 -- accel/accel.sh@21 -- # val= 00:14:05.788 11:58:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:05.788 11:58:11 -- accel/accel.sh@20 -- # IFS=: 00:14:05.788 11:58:11 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 11:58:12 -- accel/accel.sh@21 -- # val= 00:14:07.161 11:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # IFS=: 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 11:58:12 -- accel/accel.sh@21 -- # val= 00:14:07.161 11:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # IFS=: 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 11:58:12 -- accel/accel.sh@21 -- # val= 00:14:07.161 11:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # IFS=: 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 11:58:12 -- accel/accel.sh@21 -- # val= 00:14:07.161 11:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # IFS=: 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 11:58:12 -- accel/accel.sh@21 -- # val= 00:14:07.161 11:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # IFS=: 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 11:58:12 -- accel/accel.sh@21 -- # val= 00:14:07.161 11:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # IFS=: 00:14:07.161 11:58:12 -- accel/accel.sh@20 -- # read -r var val 00:14:07.161 ************************************ 00:14:07.161 END TEST accel_dualcast 00:14:07.161 ************************************ 00:14:07.161 11:58:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:07.161 11:58:12 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:14:07.161 11:58:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:07.161 00:14:07.161 real 0m3.073s 00:14:07.161 user 0m2.597s 00:14:07.161 sys 0m0.291s 00:14:07.161 11:58:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:07.161 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:14:07.161 11:58:12 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:14:07.161 11:58:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:07.161 11:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.161 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:14:07.161 ************************************ 00:14:07.161 START TEST accel_compare 00:14:07.161 ************************************ 00:14:07.161 11:58:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:14:07.161 
11:58:12 -- accel/accel.sh@16 -- # local accel_opc 00:14:07.161 11:58:12 -- accel/accel.sh@17 -- # local accel_module 00:14:07.161 11:58:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:14:07.161 11:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:14:07.161 11:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:14:07.161 11:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:07.161 11:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:07.161 11:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:07.161 11:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:07.161 11:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:07.161 11:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:14:07.161 11:58:12 -- accel/accel.sh@42 -- # jq -r . 00:14:07.161 [2024-11-29 11:58:12.349488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:07.161 [2024-11-29 11:58:12.350221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118822 ] 00:14:07.161 [2024-11-29 11:58:12.489175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.161 [2024-11-29 11:58:12.564415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.535 11:58:13 -- accel/accel.sh@18 -- # out=' 00:14:08.535 SPDK Configuration: 00:14:08.535 Core mask: 0x1 00:14:08.535 00:14:08.535 Accel Perf Configuration: 00:14:08.535 Workload Type: compare 00:14:08.535 Transfer size: 4096 bytes 00:14:08.535 Vector count 1 00:14:08.535 Module: software 00:14:08.535 Queue depth: 32 00:14:08.535 Allocate depth: 32 00:14:08.535 # threads/core: 1 00:14:08.535 Run time: 1 seconds 00:14:08.535 Verify: Yes 00:14:08.535 00:14:08.535 Running for 1 seconds... 00:14:08.535 00:14:08.535 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:08.535 ------------------------------------------------------------------------------------ 00:14:08.535 0,0 379872/s 1483 MiB/s 0 0 00:14:08.535 ==================================================================================== 00:14:08.535 Total 379872/s 1483 MiB/s 0 0' 00:14:08.535 11:58:13 -- accel/accel.sh@20 -- # IFS=: 00:14:08.535 11:58:13 -- accel/accel.sh@20 -- # read -r var val 00:14:08.535 11:58:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:14:08.535 11:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:14:08.535 11:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:14:08.535 11:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:08.535 11:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:08.535 11:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:08.535 11:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:08.535 11:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:08.535 11:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:14:08.535 11:58:13 -- accel/accel.sh@42 -- # jq -r . 00:14:08.535 [2024-11-29 11:58:13.838887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
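For reference, the accel_perf command captured in the trace above can be replayed by hand outside the autotest harness. This is a minimal sketch, assuming the SPDK tree is built at the same path as in this log and that the JSON config the harness feeds through -c /dev/fd/62 can simply be dropped for a default software-module run:

# replay the software-path compare case from this log (CI workspace path; adjust locally)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y
# -t 1 : run time in seconds ("Run time: 1 seconds" in the dump above)
# -w   : workload type (compare here; dualcast, xor and dif_* appear elsewhere in this log)
# -y   : verify the results ("Verify: Yes" in the dump above)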
00:14:08.535 [2024-11-29 11:58:13.839313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118854 ] 00:14:08.535 [2024-11-29 11:58:13.986782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.794 [2024-11-29 11:58:14.089499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=0x1 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=compare 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@24 -- # accel_opc=compare 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=software 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@23 -- # accel_module=software 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=32 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=32 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=1 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val='1 seconds' 
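The long runs of case/IFS/read lines around this point are xtrace output from the parsing loop in accel.sh, which splits each key: value line on ':' and, for the keys it cares about, records the value (the accel_opc= and accel_module= assignments visible in the trace). A simplified sketch of that idiom, not the script's exact code:

# rough shape of the loop being traced above
while IFS=: read -r var val; do
    case "$var" in
        *) : ;;    # accel.sh matches individual keys here and stores $val
    esac
done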
00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val=Yes 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:08.794 11:58:14 -- accel/accel.sh@21 -- # val= 00:14:08.794 11:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # IFS=: 00:14:08.794 11:58:14 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 11:58:15 -- accel/accel.sh@21 -- # val= 00:14:10.208 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 11:58:15 -- accel/accel.sh@21 -- # val= 00:14:10.208 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 11:58:15 -- accel/accel.sh@21 -- # val= 00:14:10.208 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 11:58:15 -- accel/accel.sh@21 -- # val= 00:14:10.208 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 11:58:15 -- accel/accel.sh@21 -- # val= 00:14:10.208 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 11:58:15 -- accel/accel.sh@21 -- # val= 00:14:10.208 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:14:10.208 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:14:10.208 ************************************ 00:14:10.208 END TEST accel_compare 00:14:10.208 ************************************ 00:14:10.208 11:58:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:10.208 11:58:15 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:14:10.208 11:58:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:10.208 00:14:10.208 real 0m3.034s 00:14:10.208 user 0m2.578s 00:14:10.208 sys 0m0.276s 00:14:10.208 11:58:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:10.208 11:58:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.208 11:58:15 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:14:10.208 11:58:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:10.208 11:58:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.208 11:58:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.208 ************************************ 00:14:10.208 START TEST accel_xor 00:14:10.208 ************************************ 00:14:10.208 11:58:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:14:10.208 11:58:15 -- accel/accel.sh@16 -- # local accel_opc 00:14:10.208 11:58:15 -- accel/accel.sh@17 -- # local accel_module 00:14:10.208 
11:58:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:14:10.208 11:58:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:10.208 11:58:15 -- accel/accel.sh@12 -- # build_accel_config 00:14:10.208 11:58:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:10.208 11:58:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:10.208 11:58:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:10.208 11:58:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:10.208 11:58:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:10.208 11:58:15 -- accel/accel.sh@41 -- # local IFS=, 00:14:10.208 11:58:15 -- accel/accel.sh@42 -- # jq -r . 00:14:10.208 [2024-11-29 11:58:15.435259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:10.208 [2024-11-29 11:58:15.435713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118892 ] 00:14:10.208 [2024-11-29 11:58:15.586819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.208 [2024-11-29 11:58:15.675065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.581 11:58:16 -- accel/accel.sh@18 -- # out=' 00:14:11.581 SPDK Configuration: 00:14:11.581 Core mask: 0x1 00:14:11.581 00:14:11.581 Accel Perf Configuration: 00:14:11.581 Workload Type: xor 00:14:11.581 Source buffers: 2 00:14:11.581 Transfer size: 4096 bytes 00:14:11.581 Vector count 1 00:14:11.581 Module: software 00:14:11.581 Queue depth: 32 00:14:11.581 Allocate depth: 32 00:14:11.581 # threads/core: 1 00:14:11.581 Run time: 1 seconds 00:14:11.581 Verify: Yes 00:14:11.581 00:14:11.581 Running for 1 seconds... 00:14:11.581 00:14:11.581 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:11.581 ------------------------------------------------------------------------------------ 00:14:11.581 0,0 209504/s 818 MiB/s 0 0 00:14:11.581 ==================================================================================== 00:14:11.581 Total 209504/s 818 MiB/s 0 0' 00:14:11.581 11:58:16 -- accel/accel.sh@20 -- # IFS=: 00:14:11.581 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:14:11.582 11:58:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:14:11.582 11:58:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:11.582 11:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:14:11.582 11:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:11.582 11:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:11.582 11:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:11.582 11:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:11.582 11:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:11.582 11:58:16 -- accel/accel.sh@41 -- # local IFS=, 00:14:11.582 11:58:16 -- accel/accel.sh@42 -- # jq -r . 00:14:11.582 [2024-11-29 11:58:16.968218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
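The bandwidth column in these result tables follows directly from the transfer rate and the 4096-byte transfer size. A quick check against the xor figure above:

# 209504 transfers/s at 4096 bytes each, expressed in MiB/s
echo $(( 209504 * 4096 / 1024 / 1024 ))   # prints 818, matching "818 MiB/s" in the table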
00:14:11.582 [2024-11-29 11:58:16.968643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118922 ] 00:14:11.840 [2024-11-29 11:58:17.119963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.840 [2024-11-29 11:58:17.226274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=0x1 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=xor 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=2 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=software 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@23 -- # accel_module=software 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=32 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=32 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=1 00:14:11.840 11:58:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val=Yes 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:11.840 11:58:17 -- accel/accel.sh@21 -- # val= 00:14:11.840 11:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # IFS=: 00:14:11.840 11:58:17 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 11:58:18 -- accel/accel.sh@21 -- # val= 00:14:13.212 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 11:58:18 -- accel/accel.sh@21 -- # val= 00:14:13.212 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 11:58:18 -- accel/accel.sh@21 -- # val= 00:14:13.212 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 11:58:18 -- accel/accel.sh@21 -- # val= 00:14:13.212 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 11:58:18 -- accel/accel.sh@21 -- # val= 00:14:13.212 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 11:58:18 -- accel/accel.sh@21 -- # val= 00:14:13.212 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:14:13.212 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:14:13.212 ************************************ 00:14:13.212 END TEST accel_xor 00:14:13.212 ************************************ 00:14:13.212 11:58:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:13.212 11:58:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:14:13.212 11:58:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:13.212 00:14:13.212 real 0m3.109s 00:14:13.212 user 0m2.629s 00:14:13.212 sys 0m0.292s 00:14:13.212 11:58:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:13.212 11:58:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.212 11:58:18 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:14:13.212 11:58:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:13.212 11:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.212 11:58:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.212 ************************************ 00:14:13.212 START TEST accel_xor 00:14:13.212 ************************************ 00:14:13.212 
11:58:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:14:13.212 11:58:18 -- accel/accel.sh@16 -- # local accel_opc 00:14:13.212 11:58:18 -- accel/accel.sh@17 -- # local accel_module 00:14:13.212 11:58:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:14:13.212 11:58:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:13.212 11:58:18 -- accel/accel.sh@12 -- # build_accel_config 00:14:13.212 11:58:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:13.212 11:58:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:13.212 11:58:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:13.212 11:58:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:13.212 11:58:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:13.212 11:58:18 -- accel/accel.sh@41 -- # local IFS=, 00:14:13.212 11:58:18 -- accel/accel.sh@42 -- # jq -r . 00:14:13.212 [2024-11-29 11:58:18.594275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:13.212 [2024-11-29 11:58:18.594698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118960 ] 00:14:13.518 [2024-11-29 11:58:18.743512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.518 [2024-11-29 11:58:18.831935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.892 11:58:20 -- accel/accel.sh@18 -- # out=' 00:14:14.892 SPDK Configuration: 00:14:14.892 Core mask: 0x1 00:14:14.892 00:14:14.892 Accel Perf Configuration: 00:14:14.892 Workload Type: xor 00:14:14.892 Source buffers: 3 00:14:14.892 Transfer size: 4096 bytes 00:14:14.892 Vector count 1 00:14:14.892 Module: software 00:14:14.892 Queue depth: 32 00:14:14.892 Allocate depth: 32 00:14:14.892 # threads/core: 1 00:14:14.892 Run time: 1 seconds 00:14:14.892 Verify: Yes 00:14:14.892 00:14:14.892 Running for 1 seconds... 00:14:14.892 00:14:14.892 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:14.892 ------------------------------------------------------------------------------------ 00:14:14.892 0,0 190624/s 744 MiB/s 0 0 00:14:14.892 ==================================================================================== 00:14:14.892 Total 190624/s 744 MiB/s 0 0' 00:14:14.892 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:14.892 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:14.892 11:58:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:14:14.892 11:58:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:14.892 11:58:20 -- accel/accel.sh@12 -- # build_accel_config 00:14:14.892 11:58:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:14.892 11:58:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:14.892 11:58:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:14.892 11:58:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:14.892 11:58:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:14.892 11:58:20 -- accel/accel.sh@41 -- # local IFS=, 00:14:14.892 11:58:20 -- accel/accel.sh@42 -- # jq -r . 00:14:14.892 [2024-11-29 11:58:20.116152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
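This second xor case adds -x 3, which shows up as "Source buffers: 3" in the configuration dump; the earlier xor run used the default of two source buffers. Sketched as direct invocations, under the same path assumptions as the compare example above:

# default xor (two source buffers) vs. the -x 3 variant traced here
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # "Source buffers: 2"
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # "Source buffers: 3"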
00:14:14.892 [2024-11-29 11:58:20.116618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118988 ] 00:14:14.892 [2024-11-29 11:58:20.266304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.892 [2024-11-29 11:58:20.359556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=0x1 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=xor 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@24 -- # accel_opc=xor 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=3 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=software 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@23 -- # accel_module=software 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=32 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=32 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=1 00:14:15.150 11:58:20 -- 
accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val=Yes 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:15.150 11:58:20 -- accel/accel.sh@21 -- # val= 00:14:15.150 11:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:14:15.150 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 11:58:21 -- accel/accel.sh@21 -- # val= 00:14:16.523 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 11:58:21 -- accel/accel.sh@21 -- # val= 00:14:16.523 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 11:58:21 -- accel/accel.sh@21 -- # val= 00:14:16.523 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 11:58:21 -- accel/accel.sh@21 -- # val= 00:14:16.523 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 11:58:21 -- accel/accel.sh@21 -- # val= 00:14:16.523 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 11:58:21 -- accel/accel.sh@21 -- # val= 00:14:16.523 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:14:16.523 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:14:16.523 ************************************ 00:14:16.523 END TEST accel_xor 00:14:16.523 ************************************ 00:14:16.523 11:58:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:16.523 11:58:21 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:14:16.523 11:58:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:16.523 00:14:16.523 real 0m3.058s 00:14:16.523 user 0m2.588s 00:14:16.523 sys 0m0.308s 00:14:16.523 11:58:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:16.523 11:58:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.523 11:58:21 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:14:16.523 11:58:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:16.523 11:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.523 11:58:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.523 ************************************ 00:14:16.523 START TEST accel_dif_verify 00:14:16.523 ************************************ 
00:14:16.523 11:58:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:14:16.523 11:58:21 -- accel/accel.sh@16 -- # local accel_opc 00:14:16.523 11:58:21 -- accel/accel.sh@17 -- # local accel_module 00:14:16.523 11:58:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:14:16.523 11:58:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:16.523 11:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:14:16.523 11:58:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:16.523 11:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:16.523 11:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:16.523 11:58:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:16.523 11:58:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:16.523 11:58:21 -- accel/accel.sh@41 -- # local IFS=, 00:14:16.523 11:58:21 -- accel/accel.sh@42 -- # jq -r . 00:14:16.523 [2024-11-29 11:58:21.700749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:16.523 [2024-11-29 11:58:21.701142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119027 ] 00:14:16.523 [2024-11-29 11:58:21.850340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.523 [2024-11-29 11:58:21.931330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.895 11:58:23 -- accel/accel.sh@18 -- # out=' 00:14:17.895 SPDK Configuration: 00:14:17.895 Core mask: 0x1 00:14:17.895 00:14:17.895 Accel Perf Configuration: 00:14:17.895 Workload Type: dif_verify 00:14:17.895 Vector size: 4096 bytes 00:14:17.895 Transfer size: 4096 bytes 00:14:17.895 Block size: 512 bytes 00:14:17.895 Metadata size: 8 bytes 00:14:17.895 Vector count 1 00:14:17.895 Module: software 00:14:17.895 Queue depth: 32 00:14:17.895 Allocate depth: 32 00:14:17.895 # threads/core: 1 00:14:17.895 Run time: 1 seconds 00:14:17.895 Verify: No 00:14:17.895 00:14:17.895 Running for 1 seconds... 00:14:17.895 00:14:17.895 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:17.895 ------------------------------------------------------------------------------------ 00:14:17.895 0,0 93824/s 372 MiB/s 0 0 00:14:17.895 ==================================================================================== 00:14:17.895 Total 93824/s 366 MiB/s 0 0' 00:14:17.895 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:17.895 11:58:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:14:17.895 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:17.895 11:58:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:17.895 11:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:14:17.895 11:58:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:17.895 11:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:17.895 11:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:17.895 11:58:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:17.895 11:58:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:17.895 11:58:23 -- accel/accel.sh@41 -- # local IFS=, 00:14:17.895 11:58:23 -- accel/accel.sh@42 -- # jq -r . 00:14:17.895 [2024-11-29 11:58:23.222236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
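For the dif_verify case the configuration dump adds block and metadata sizes: each 4096-byte transfer is handled as 512-byte blocks, each with 8 bytes of protection metadata. A quick per-transfer breakdown using only the figures from the dump (reading the 8-byte field as standard DIF protection information is an assumption here):

# per-transfer breakdown for the dif_verify configuration above
echo $(( 4096 / 512 ))       # 8 protected blocks per 4 KiB transfer
echo $(( 4096 / 512 * 8 ))   # 64 bytes of DIF metadata checked per transfer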
00:14:17.895 [2024-11-29 11:58:23.222672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119058 ] 00:14:17.895 [2024-11-29 11:58:23.372477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.153 [2024-11-29 11:58:23.484260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val=0x1 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val=dif_verify 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val='512 bytes' 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.153 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.153 11:58:23 -- accel/accel.sh@21 -- # val='8 bytes' 00:14:18.153 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val=software 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@23 -- # accel_module=software 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- 
accel/accel.sh@21 -- # val=32 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val=32 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val=1 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val=No 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:18.154 11:58:23 -- accel/accel.sh@21 -- # val= 00:14:18.154 11:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:14:18.154 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@21 -- # val= 00:14:19.526 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@21 -- # val= 00:14:19.526 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@21 -- # val= 00:14:19.526 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@21 -- # val= 00:14:19.526 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@21 -- # val= 00:14:19.526 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@21 -- # val= 00:14:19.526 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:14:19.526 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:14:19.526 11:58:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:19.526 11:58:24 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:14:19.527 11:58:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:19.527 00:14:19.527 real 0m3.092s 00:14:19.527 user 0m2.581s 00:14:19.527 sys 0m0.333s 00:14:19.527 11:58:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:19.527 11:58:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.527 ************************************ 00:14:19.527 END 
TEST accel_dif_verify 00:14:19.527 ************************************ 00:14:19.527 11:58:24 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:14:19.527 11:58:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:19.527 11:58:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.527 11:58:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.527 ************************************ 00:14:19.527 START TEST accel_dif_generate 00:14:19.527 ************************************ 00:14:19.527 11:58:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:14:19.527 11:58:24 -- accel/accel.sh@16 -- # local accel_opc 00:14:19.527 11:58:24 -- accel/accel.sh@17 -- # local accel_module 00:14:19.527 11:58:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:14:19.527 11:58:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:19.527 11:58:24 -- accel/accel.sh@12 -- # build_accel_config 00:14:19.527 11:58:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:19.527 11:58:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:19.527 11:58:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:19.527 11:58:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:19.527 11:58:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:19.527 11:58:24 -- accel/accel.sh@41 -- # local IFS=, 00:14:19.527 11:58:24 -- accel/accel.sh@42 -- # jq -r . 00:14:19.527 [2024-11-29 11:58:24.848278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:19.527 [2024-11-29 11:58:24.848651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119091 ] 00:14:19.527 [2024-11-29 11:58:24.999697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.784 [2024-11-29 11:58:25.091753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.158 11:58:26 -- accel/accel.sh@18 -- # out=' 00:14:21.158 SPDK Configuration: 00:14:21.158 Core mask: 0x1 00:14:21.158 00:14:21.158 Accel Perf Configuration: 00:14:21.158 Workload Type: dif_generate 00:14:21.158 Vector size: 4096 bytes 00:14:21.158 Transfer size: 4096 bytes 00:14:21.158 Block size: 512 bytes 00:14:21.158 Metadata size: 8 bytes 00:14:21.158 Vector count 1 00:14:21.158 Module: software 00:14:21.158 Queue depth: 32 00:14:21.158 Allocate depth: 32 00:14:21.158 # threads/core: 1 00:14:21.158 Run time: 1 seconds 00:14:21.158 Verify: No 00:14:21.158 00:14:21.158 Running for 1 seconds... 
00:14:21.158 00:14:21.158 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:21.158 ------------------------------------------------------------------------------------ 00:14:21.158 0,0 108128/s 428 MiB/s 0 0 00:14:21.158 ==================================================================================== 00:14:21.158 Total 108128/s 422 MiB/s 0 0' 00:14:21.158 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.158 11:58:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:14:21.158 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.158 11:58:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:21.158 11:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:14:21.158 11:58:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:21.158 11:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:21.158 11:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:21.158 11:58:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:21.158 11:58:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:21.159 11:58:26 -- accel/accel.sh@41 -- # local IFS=, 00:14:21.159 11:58:26 -- accel/accel.sh@42 -- # jq -r . 00:14:21.159 [2024-11-29 11:58:26.404615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:21.159 [2024-11-29 11:58:26.405042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119126 ] 00:14:21.159 [2024-11-29 11:58:26.554902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.159 [2024-11-29 11:58:26.649517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val=0x1 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val=dif_generate 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 
00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.417 11:58:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:14:21.417 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.417 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val=software 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@23 -- # accel_module=software 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val=32 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val=32 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val=1 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val=No 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:21.418 11:58:26 -- accel/accel.sh@21 -- # val= 00:14:21.418 11:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:14:21.418 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@21 -- # val= 00:14:22.792 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@21 -- # val= 00:14:22.792 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@21 -- # val= 00:14:22.792 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:14:22.792 11:58:27 -- 
accel/accel.sh@20 -- # IFS=: 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@21 -- # val= 00:14:22.792 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@21 -- # val= 00:14:22.792 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@21 -- # val= 00:14:22.792 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:14:22.792 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:14:22.792 11:58:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:22.792 11:58:27 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:14:22.792 11:58:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:22.792 00:14:22.792 real 0m3.116s 00:14:22.792 user 0m2.630s 00:14:22.792 sys 0m0.316s 00:14:22.792 11:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:22.792 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:14:22.792 ************************************ 00:14:22.792 END TEST accel_dif_generate 00:14:22.792 ************************************ 00:14:22.792 11:58:27 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:14:22.792 11:58:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:22.792 11:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.792 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:14:22.792 ************************************ 00:14:22.792 START TEST accel_dif_generate_copy 00:14:22.792 ************************************ 00:14:22.792 11:58:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:14:22.792 11:58:27 -- accel/accel.sh@16 -- # local accel_opc 00:14:22.792 11:58:27 -- accel/accel.sh@17 -- # local accel_module 00:14:22.792 11:58:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:14:22.792 11:58:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:22.792 11:58:27 -- accel/accel.sh@12 -- # build_accel_config 00:14:22.792 11:58:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:22.792 11:58:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:22.792 11:58:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:22.792 11:58:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:22.792 11:58:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:22.792 11:58:27 -- accel/accel.sh@41 -- # local IFS=, 00:14:22.792 11:58:27 -- accel/accel.sh@42 -- # jq -r . 00:14:22.792 [2024-11-29 11:58:28.015233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:22.792 [2024-11-29 11:58:28.015669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119166 ] 00:14:22.792 [2024-11-29 11:58:28.166442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.792 [2024-11-29 11:58:28.257456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.168 11:58:29 -- accel/accel.sh@18 -- # out=' 00:14:24.168 SPDK Configuration: 00:14:24.168 Core mask: 0x1 00:14:24.168 00:14:24.168 Accel Perf Configuration: 00:14:24.168 Workload Type: dif_generate_copy 00:14:24.168 Vector size: 4096 bytes 00:14:24.168 Transfer size: 4096 bytes 00:14:24.168 Vector count 1 00:14:24.168 Module: software 00:14:24.168 Queue depth: 32 00:14:24.168 Allocate depth: 32 00:14:24.168 # threads/core: 1 00:14:24.168 Run time: 1 seconds 00:14:24.168 Verify: No 00:14:24.168 00:14:24.168 Running for 1 seconds... 00:14:24.168 00:14:24.168 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:24.168 ------------------------------------------------------------------------------------ 00:14:24.168 0,0 78784/s 312 MiB/s 0 0 00:14:24.168 ==================================================================================== 00:14:24.168 Total 78784/s 307 MiB/s 0 0' 00:14:24.168 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.168 11:58:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:14:24.168 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.168 11:58:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:24.168 11:58:29 -- accel/accel.sh@12 -- # build_accel_config 00:14:24.168 11:58:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:24.168 11:58:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:24.168 11:58:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:24.168 11:58:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:24.168 11:58:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:24.168 11:58:29 -- accel/accel.sh@41 -- # local IFS=, 00:14:24.168 11:58:29 -- accel/accel.sh@42 -- # jq -r . 00:14:24.168 [2024-11-29 11:58:29.562346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:24.168 [2024-11-29 11:58:29.563247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119194 ] 00:14:24.426 [2024-11-29 11:58:29.706720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.426 [2024-11-29 11:58:29.805001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val=0x1 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.426 11:58:29 -- accel/accel.sh@21 -- # val=software 00:14:24.426 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.426 11:58:29 -- accel/accel.sh@23 -- # accel_module=software 00:14:24.426 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 -- # val=32 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 -- # val=32 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 
-- # val=1 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 -- # val=No 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:24.427 11:58:29 -- accel/accel.sh@21 -- # val= 00:14:24.427 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:14:24.427 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@21 -- # val= 00:14:25.802 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@21 -- # val= 00:14:25.802 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@21 -- # val= 00:14:25.802 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@21 -- # val= 00:14:25.802 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@21 -- # val= 00:14:25.802 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@21 -- # val= 00:14:25.802 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:14:25.802 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:14:25.802 11:58:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:25.802 11:58:31 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:14:25.802 11:58:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:25.802 00:14:25.802 real 0m3.104s 00:14:25.802 user 0m2.642s 00:14:25.802 sys 0m0.300s 00:14:25.802 11:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:25.802 11:58:31 -- common/autotest_common.sh@10 -- # set +x 00:14:25.802 ************************************ 00:14:25.802 END TEST accel_dif_generate_copy 00:14:25.802 ************************************ 00:14:25.802 11:58:31 -- accel/accel.sh@107 -- # [[ y == y ]] 00:14:25.802 11:58:31 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:25.802 11:58:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:14:25.802 11:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.802 11:58:31 -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.802 ************************************ 00:14:25.802 START TEST accel_comp 00:14:25.802 ************************************ 00:14:25.802 11:58:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:25.802 11:58:31 -- accel/accel.sh@16 -- # local accel_opc 00:14:25.802 11:58:31 -- accel/accel.sh@17 -- # local accel_module 00:14:25.802 11:58:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:25.802 11:58:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:25.802 11:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:14:25.802 11:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:25.802 11:58:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:25.802 11:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:25.802 11:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:25.802 11:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:25.802 11:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:14:25.802 11:58:31 -- accel/accel.sh@42 -- # jq -r . 00:14:25.802 [2024-11-29 11:58:31.175670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:25.803 [2024-11-29 11:58:31.176444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119234 ] 00:14:26.063 [2024-11-29 11:58:31.314707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.063 [2024-11-29 11:58:31.385463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.435 11:58:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:27.435 00:14:27.435 SPDK Configuration: 00:14:27.435 Core mask: 0x1 00:14:27.435 00:14:27.435 Accel Perf Configuration: 00:14:27.435 Workload Type: compress 00:14:27.435 Transfer size: 4096 bytes 00:14:27.435 Vector count 1 00:14:27.435 Module: software 00:14:27.435 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:27.435 Queue depth: 32 00:14:27.435 Allocate depth: 32 00:14:27.435 # threads/core: 1 00:14:27.435 Run time: 1 seconds 00:14:27.435 Verify: No 00:14:27.435 00:14:27.435 Running for 1 seconds... 
00:14:27.435 00:14:27.435 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:27.435 ------------------------------------------------------------------------------------ 00:14:27.435 0,0 43360/s 180 MiB/s 0 0 00:14:27.435 ==================================================================================== 00:14:27.435 Total 43360/s 169 MiB/s 0 0' 00:14:27.435 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:14:27.435 11:58:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:27.435 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:14:27.435 11:58:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:27.435 11:58:32 -- accel/accel.sh@12 -- # build_accel_config 00:14:27.435 11:58:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:27.435 11:58:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:27.435 11:58:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:27.435 11:58:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:27.435 11:58:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:27.435 11:58:32 -- accel/accel.sh@41 -- # local IFS=, 00:14:27.435 11:58:32 -- accel/accel.sh@42 -- # jq -r . 00:14:27.435 [2024-11-29 11:58:32.697510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:27.435 [2024-11-29 11:58:32.697957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119262 ] 00:14:27.435 [2024-11-29 11:58:32.847931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.435 [2024-11-29 11:58:32.935793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val=0x1 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val=compress 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@24 -- # accel_opc=compress 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 
00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val=software 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@23 -- # accel_module=software 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.693 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.693 11:58:33 -- accel/accel.sh@21 -- # val=32 00:14:27.693 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.694 11:58:33 -- accel/accel.sh@21 -- # val=32 00:14:27.694 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.694 11:58:33 -- accel/accel.sh@21 -- # val=1 00:14:27.694 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.694 11:58:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:27.694 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.694 11:58:33 -- accel/accel.sh@21 -- # val=No 00:14:27.694 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.694 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.694 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:27.694 11:58:33 -- accel/accel.sh@21 -- # val= 00:14:27.694 11:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # IFS=: 00:14:27.694 11:58:33 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@21 -- # val= 00:14:29.069 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@21 -- # val= 00:14:29.069 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@21 -- # val= 00:14:29.069 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@21 -- # val= 
00:14:29.069 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@21 -- # val= 00:14:29.069 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@21 -- # val= 00:14:29.069 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:14:29.069 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:14:29.069 11:58:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:29.069 11:58:34 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:14:29.069 11:58:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:29.069 00:14:29.069 real 0m3.077s 00:14:29.069 user 0m2.570s 00:14:29.069 sys 0m0.338s 00:14:29.069 11:58:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.069 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:14:29.070 ************************************ 00:14:29.070 END TEST accel_comp 00:14:29.070 ************************************ 00:14:29.070 11:58:34 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:29.070 11:58:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:29.070 11:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.070 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:14:29.070 ************************************ 00:14:29.070 START TEST accel_decomp 00:14:29.070 ************************************ 00:14:29.070 11:58:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:29.070 11:58:34 -- accel/accel.sh@16 -- # local accel_opc 00:14:29.070 11:58:34 -- accel/accel.sh@17 -- # local accel_module 00:14:29.070 11:58:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:29.070 11:58:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:29.070 11:58:34 -- accel/accel.sh@12 -- # build_accel_config 00:14:29.070 11:58:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:29.070 11:58:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:29.070 11:58:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:29.070 11:58:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:29.070 11:58:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:29.070 11:58:34 -- accel/accel.sh@41 -- # local IFS=, 00:14:29.070 11:58:34 -- accel/accel.sh@42 -- # jq -r . 00:14:29.070 [2024-11-29 11:58:34.313172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:29.070 [2024-11-29 11:58:34.313702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119304 ] 00:14:29.070 [2024-11-29 11:58:34.465091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.070 [2024-11-29 11:58:34.566633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.442 11:58:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:14:30.442 00:14:30.442 SPDK Configuration: 00:14:30.442 Core mask: 0x1 00:14:30.442 00:14:30.442 Accel Perf Configuration: 00:14:30.442 Workload Type: decompress 00:14:30.442 Transfer size: 4096 bytes 00:14:30.442 Vector count 1 00:14:30.442 Module: software 00:14:30.442 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:30.442 Queue depth: 32 00:14:30.442 Allocate depth: 32 00:14:30.442 # threads/core: 1 00:14:30.442 Run time: 1 seconds 00:14:30.442 Verify: Yes 00:14:30.442 00:14:30.442 Running for 1 seconds... 00:14:30.442 00:14:30.442 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:30.442 ------------------------------------------------------------------------------------ 00:14:30.442 0,0 58720/s 108 MiB/s 0 0 00:14:30.442 ==================================================================================== 00:14:30.442 Total 58720/s 229 MiB/s 0 0' 00:14:30.442 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:14:30.442 11:58:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:30.442 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:14:30.442 11:58:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:30.442 11:58:35 -- accel/accel.sh@12 -- # build_accel_config 00:14:30.442 11:58:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:30.442 11:58:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:30.442 11:58:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:30.442 11:58:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:30.442 11:58:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:30.442 11:58:35 -- accel/accel.sh@41 -- # local IFS=, 00:14:30.442 11:58:35 -- accel/accel.sh@42 -- # jq -r . 00:14:30.442 [2024-11-29 11:58:35.871152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:30.442 [2024-11-29 11:58:35.871562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119329 ] 00:14:30.700 [2024-11-29 11:58:36.019976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.700 [2024-11-29 11:58:36.112794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=0x1 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=decompress 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=software 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@23 -- # accel_module=software 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=32 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- 
accel/accel.sh@21 -- # val=32 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=1 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val=Yes 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:30.700 11:58:36 -- accel/accel.sh@21 -- # val= 00:14:30.700 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:14:30.700 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@21 -- # val= 00:14:32.089 11:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # IFS=: 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@21 -- # val= 00:14:32.089 11:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # IFS=: 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@21 -- # val= 00:14:32.089 11:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # IFS=: 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@21 -- # val= 00:14:32.089 11:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # IFS=: 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@21 -- # val= 00:14:32.089 11:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # IFS=: 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@21 -- # val= 00:14:32.089 11:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # IFS=: 00:14:32.089 11:58:37 -- accel/accel.sh@20 -- # read -r var val 00:14:32.089 11:58:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:32.089 11:58:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:32.089 11:58:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:32.090 00:14:32.090 real 0m3.116s 00:14:32.090 user 0m2.610s 00:14:32.090 sys 0m0.329s 00:14:32.090 11:58:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.090 11:58:37 -- common/autotest_common.sh@10 -- # set +x 00:14:32.090 ************************************ 00:14:32.090 END TEST accel_decomp 00:14:32.090 ************************************ 00:14:32.090 11:58:37 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:14:32.090 11:58:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:32.090 11:58:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.090 11:58:37 -- common/autotest_common.sh@10 -- # set +x 00:14:32.090 ************************************ 00:14:32.090 START TEST accel_decmop_full 00:14:32.090 ************************************ 00:14:32.090 11:58:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:32.090 11:58:37 -- accel/accel.sh@16 -- # local accel_opc 00:14:32.090 11:58:37 -- accel/accel.sh@17 -- # local accel_module 00:14:32.090 11:58:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:32.090 11:58:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:32.090 11:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:14:32.090 11:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:32.090 11:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:32.090 11:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:32.090 11:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:32.090 11:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:32.090 11:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:14:32.090 11:58:37 -- accel/accel.sh@42 -- # jq -r . 00:14:32.090 [2024-11-29 11:58:37.482564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:32.090 [2024-11-29 11:58:37.482972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119372 ] 00:14:32.348 [2024-11-29 11:58:37.632028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.348 [2024-11-29 11:58:37.715956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.722 11:58:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:33.722 00:14:33.722 SPDK Configuration: 00:14:33.722 Core mask: 0x1 00:14:33.722 00:14:33.722 Accel Perf Configuration: 00:14:33.722 Workload Type: decompress 00:14:33.722 Transfer size: 111250 bytes 00:14:33.722 Vector count 1 00:14:33.722 Module: software 00:14:33.722 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.722 Queue depth: 32 00:14:33.722 Allocate depth: 32 00:14:33.722 # threads/core: 1 00:14:33.722 Run time: 1 seconds 00:14:33.722 Verify: Yes 00:14:33.722 00:14:33.722 Running for 1 seconds... 
00:14:33.722 00:14:33.722 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:33.722 ------------------------------------------------------------------------------------ 00:14:33.722 0,0 4416/s 182 MiB/s 0 0 00:14:33.722 ==================================================================================== 00:14:33.722 Total 4416/s 468 MiB/s 0 0' 00:14:33.722 11:58:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:33.722 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:14:33.722 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:14:33.722 11:58:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:33.722 11:58:38 -- accel/accel.sh@12 -- # build_accel_config 00:14:33.722 11:58:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:33.722 11:58:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:33.722 11:58:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:33.722 11:58:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:33.722 11:58:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:33.722 11:58:38 -- accel/accel.sh@41 -- # local IFS=, 00:14:33.722 11:58:38 -- accel/accel.sh@42 -- # jq -r . 00:14:33.722 [2024-11-29 11:58:39.023367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:33.722 [2024-11-29 11:58:39.023780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119395 ] 00:14:33.722 [2024-11-29 11:58:39.173275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.979 [2024-11-29 11:58:39.278827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=0x1 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=decompress 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:33.979 11:58:39 -- 
accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=software 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@23 -- # accel_module=software 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=32 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=32 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=1 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val=Yes 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:33.979 11:58:39 -- accel/accel.sh@21 -- # val= 00:14:33.979 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:14:33.979 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- accel/accel.sh@21 -- # val= 00:14:35.350 11:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # IFS=: 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- accel/accel.sh@21 -- # val= 00:14:35.350 11:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # IFS=: 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- accel/accel.sh@21 -- # val= 00:14:35.350 11:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # IFS=: 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- 
accel/accel.sh@21 -- # val= 00:14:35.350 11:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # IFS=: 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- accel/accel.sh@21 -- # val= 00:14:35.350 11:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # IFS=: 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- accel/accel.sh@21 -- # val= 00:14:35.350 11:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # IFS=: 00:14:35.350 11:58:40 -- accel/accel.sh@20 -- # read -r var val 00:14:35.350 11:58:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:35.350 11:58:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:35.350 11:58:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:35.350 00:14:35.350 real 0m3.144s 00:14:35.350 user 0m2.679s 00:14:35.350 sys 0m0.309s 00:14:35.350 11:58:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:35.350 11:58:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.350 ************************************ 00:14:35.350 END TEST accel_decmop_full 00:14:35.350 ************************************ 00:14:35.350 11:58:40 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:35.350 11:58:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:35.350 11:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.350 11:58:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.350 ************************************ 00:14:35.350 START TEST accel_decomp_mcore 00:14:35.350 ************************************ 00:14:35.350 11:58:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:35.350 11:58:40 -- accel/accel.sh@16 -- # local accel_opc 00:14:35.350 11:58:40 -- accel/accel.sh@17 -- # local accel_module 00:14:35.350 11:58:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:35.350 11:58:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:35.350 11:58:40 -- accel/accel.sh@12 -- # build_accel_config 00:14:35.350 11:58:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:35.350 11:58:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:35.350 11:58:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:35.350 11:58:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:35.350 11:58:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:35.350 11:58:40 -- accel/accel.sh@41 -- # local IFS=, 00:14:35.350 11:58:40 -- accel/accel.sh@42 -- # jq -r . 00:14:35.350 [2024-11-29 11:58:40.677748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:35.350 [2024-11-29 11:58:40.678240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119440 ] 00:14:35.350 [2024-11-29 11:58:40.847280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.607 [2024-11-29 11:58:40.939949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.607 [2024-11-29 11:58:40.940088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.607 [2024-11-29 11:58:40.940217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.607 [2024-11-29 11:58:40.940326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.977 11:58:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:36.977 00:14:36.977 SPDK Configuration: 00:14:36.977 Core mask: 0xf 00:14:36.977 00:14:36.977 Accel Perf Configuration: 00:14:36.977 Workload Type: decompress 00:14:36.977 Transfer size: 4096 bytes 00:14:36.977 Vector count 1 00:14:36.977 Module: software 00:14:36.977 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:36.978 Queue depth: 32 00:14:36.978 Allocate depth: 32 00:14:36.978 # threads/core: 1 00:14:36.978 Run time: 1 seconds 00:14:36.978 Verify: Yes 00:14:36.978 00:14:36.978 Running for 1 seconds... 00:14:36.978 00:14:36.978 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:36.978 ------------------------------------------------------------------------------------ 00:14:36.978 0,0 50080/s 92 MiB/s 0 0 00:14:36.978 3,0 48320/s 89 MiB/s 0 0 00:14:36.978 2,0 47488/s 87 MiB/s 0 0 00:14:36.978 1,0 49280/s 90 MiB/s 0 0 00:14:36.978 ==================================================================================== 00:14:36.978 Total 195168/s 762 MiB/s 0 0' 00:14:36.978 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:36.978 11:58:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:36.978 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:36.978 11:58:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:36.978 11:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:14:36.978 11:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:36.978 11:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:36.978 11:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:36.978 11:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:36.978 11:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:36.978 11:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:14:36.978 11:58:42 -- accel/accel.sh@42 -- # jq -r . 00:14:36.978 [2024-11-29 11:58:42.226845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:36.978 [2024-11-29 11:58:42.227715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119466 ] 00:14:36.978 [2024-11-29 11:58:42.385292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.978 [2024-11-29 11:58:42.483202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.978 [2024-11-29 11:58:42.483309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.978 [2024-11-29 11:58:42.483440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.978 [2024-11-29 11:58:42.483443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.235 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.235 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.235 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.235 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.235 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.235 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=0xf 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=decompress 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=software 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@23 -- # accel_module=software 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 
00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=32 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=32 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=1 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val=Yes 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:37.236 11:58:42 -- accel/accel.sh@21 -- # val= 00:14:37.236 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:14:37.236 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- 
accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@21 -- # val= 00:14:38.650 11:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # IFS=: 00:14:38.650 11:58:43 -- accel/accel.sh@20 -- # read -r var val 00:14:38.650 11:58:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:38.650 11:58:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:38.650 11:58:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:38.650 00:14:38.650 real 0m3.121s 00:14:38.650 user 0m9.584s 00:14:38.650 sys 0m0.336s 00:14:38.650 11:58:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.650 11:58:43 -- common/autotest_common.sh@10 -- # set +x 00:14:38.650 ************************************ 00:14:38.650 END TEST accel_decomp_mcore 00:14:38.650 ************************************ 00:14:38.650 11:58:43 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:38.650 11:58:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:38.650 11:58:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.650 11:58:43 -- common/autotest_common.sh@10 -- # set +x 00:14:38.650 ************************************ 00:14:38.650 START TEST accel_decomp_full_mcore 00:14:38.650 ************************************ 00:14:38.650 11:58:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:38.650 11:58:43 -- accel/accel.sh@16 -- # local accel_opc 00:14:38.650 11:58:43 -- accel/accel.sh@17 -- # local accel_module 00:14:38.650 11:58:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:38.650 11:58:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:38.650 11:58:43 -- accel/accel.sh@12 -- # build_accel_config 00:14:38.650 11:58:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:38.650 11:58:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:38.650 11:58:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:38.650 11:58:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:38.650 11:58:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:38.650 11:58:43 -- accel/accel.sh@41 -- # local IFS=, 00:14:38.650 11:58:43 -- accel/accel.sh@42 -- # jq -r . 00:14:38.650 [2024-11-29 11:58:43.844216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:38.650 [2024-11-29 11:58:43.844571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119517 ] 00:14:38.650 [2024-11-29 11:58:44.005667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.650 [2024-11-29 11:58:44.079624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.650 [2024-11-29 11:58:44.079779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.650 [2024-11-29 11:58:44.079880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.650 [2024-11-29 11:58:44.080435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.023 11:58:45 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:14:40.023 00:14:40.023 SPDK Configuration: 00:14:40.023 Core mask: 0xf 00:14:40.023 00:14:40.023 Accel Perf Configuration: 00:14:40.023 Workload Type: decompress 00:14:40.023 Transfer size: 111250 bytes 00:14:40.023 Vector count 1 00:14:40.023 Module: software 00:14:40.023 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:40.023 Queue depth: 32 00:14:40.023 Allocate depth: 32 00:14:40.023 # threads/core: 1 00:14:40.023 Run time: 1 seconds 00:14:40.023 Verify: Yes 00:14:40.023 00:14:40.023 Running for 1 seconds... 00:14:40.023 00:14:40.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:40.023 ------------------------------------------------------------------------------------ 00:14:40.023 0,0 4512/s 186 MiB/s 0 0 00:14:40.023 3,0 4512/s 186 MiB/s 0 0 00:14:40.023 2,0 4480/s 185 MiB/s 0 0 00:14:40.023 1,0 4512/s 186 MiB/s 0 0 00:14:40.023 ==================================================================================== 00:14:40.023 Total 18016/s 1911 MiB/s 0 0' 00:14:40.023 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.023 11:58:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:40.023 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.023 11:58:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:40.023 11:58:45 -- accel/accel.sh@12 -- # build_accel_config 00:14:40.023 11:58:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:40.023 11:58:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:40.023 11:58:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:40.023 11:58:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:40.023 11:58:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:40.023 11:58:45 -- accel/accel.sh@41 -- # local IFS=, 00:14:40.023 11:58:45 -- accel/accel.sh@42 -- # jq -r . 00:14:40.023 [2024-11-29 11:58:45.383820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:40.023 [2024-11-29 11:58:45.384292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119550 ] 00:14:40.281 [2024-11-29 11:58:45.548478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.281 [2024-11-29 11:58:45.630141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.281 [2024-11-29 11:58:45.630278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.281 [2024-11-29 11:58:45.630406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.281 [2024-11-29 11:58:45.631004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=0xf 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=decompress 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=software 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@23 -- # accel_module=software 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 
00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=32 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=32 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=1 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val=Yes 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:40.281 11:58:45 -- accel/accel.sh@21 -- # val= 00:14:40.281 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:14:40.281 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- 
accel/accel.sh@20 -- # read -r var val 00:14:41.655 11:58:46 -- accel/accel.sh@21 -- # val= 00:14:41.655 11:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # IFS=: 00:14:41.655 11:58:46 -- accel/accel.sh@20 -- # read -r var val 00:14:41.656 11:58:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:41.656 11:58:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:41.656 11:58:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:41.656 00:14:41.656 real 0m3.153s 00:14:41.656 user 0m9.679s 00:14:41.656 sys 0m0.366s 00:14:41.656 11:58:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:41.656 11:58:46 -- common/autotest_common.sh@10 -- # set +x 00:14:41.656 ************************************ 00:14:41.656 END TEST accel_decomp_full_mcore 00:14:41.656 ************************************ 00:14:41.656 11:58:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.656 11:58:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:41.656 11:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.656 11:58:47 -- common/autotest_common.sh@10 -- # set +x 00:14:41.656 ************************************ 00:14:41.656 START TEST accel_decomp_mthread 00:14:41.656 ************************************ 00:14:41.656 11:58:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.656 11:58:47 -- accel/accel.sh@16 -- # local accel_opc 00:14:41.656 11:58:47 -- accel/accel.sh@17 -- # local accel_module 00:14:41.656 11:58:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.656 11:58:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.656 11:58:47 -- accel/accel.sh@12 -- # build_accel_config 00:14:41.656 11:58:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:41.656 11:58:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:41.656 11:58:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:41.656 11:58:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:41.656 11:58:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:41.656 11:58:47 -- accel/accel.sh@41 -- # local IFS=, 00:14:41.656 11:58:47 -- accel/accel.sh@42 -- # jq -r . 00:14:41.656 [2024-11-29 11:58:47.045751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:41.656 [2024-11-29 11:58:47.046119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119592 ] 00:14:41.914 [2024-11-29 11:58:47.191690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.914 [2024-11-29 11:58:47.272023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.289 11:58:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:14:43.289 00:14:43.289 SPDK Configuration: 00:14:43.289 Core mask: 0x1 00:14:43.289 00:14:43.289 Accel Perf Configuration: 00:14:43.289 Workload Type: decompress 00:14:43.289 Transfer size: 4096 bytes 00:14:43.289 Vector count 1 00:14:43.289 Module: software 00:14:43.289 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:43.289 Queue depth: 32 00:14:43.289 Allocate depth: 32 00:14:43.289 # threads/core: 2 00:14:43.289 Run time: 1 seconds 00:14:43.289 Verify: Yes 00:14:43.289 00:14:43.289 Running for 1 seconds... 00:14:43.289 00:14:43.289 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:43.289 ------------------------------------------------------------------------------------ 00:14:43.289 0,1 27904/s 51 MiB/s 0 0 00:14:43.289 0,0 27744/s 51 MiB/s 0 0 00:14:43.289 ==================================================================================== 00:14:43.289 Total 55648/s 217 MiB/s 0 0' 00:14:43.289 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.289 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.289 11:58:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:43.289 11:58:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:43.289 11:58:48 -- accel/accel.sh@12 -- # build_accel_config 00:14:43.289 11:58:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:43.289 11:58:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:43.289 11:58:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:43.289 11:58:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:43.289 11:58:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:43.289 11:58:48 -- accel/accel.sh@41 -- # local IFS=, 00:14:43.289 11:58:48 -- accel/accel.sh@42 -- # jq -r . 00:14:43.289 [2024-11-29 11:58:48.588108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
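The two rows above, 0,1 and 0,0, are the two worker threads that -T 2 places on core 0; their Transfers add up to the Total row (27904 + 27744 = 55648). Judging by the configuration dump the harness prints, the flags map as follows: -t 1 is the run time in seconds, -w decompress the workload type, -l the input file, -y enables verification, and -T 2 asks for two threads per core. A minimal manual re-run along those lines (paths as logged; the -c /dev/fd/62 JSON accel config is normally supplied by accel.sh's build_accel_config and is assumed to be optional here):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2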
00:14:43.289 [2024-11-29 11:58:48.588557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119622 ] 00:14:43.289 [2024-11-29 11:58:48.735261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.548 [2024-11-29 11:58:48.815573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val=0x1 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val=decompress 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.548 11:58:48 -- accel/accel.sh@21 -- # val=software 00:14:43.548 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.548 11:58:48 -- accel/accel.sh@23 -- # accel_module=software 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.548 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val=32 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- 
accel/accel.sh@21 -- # val=32 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val=2 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val=Yes 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:43.549 11:58:48 -- accel/accel.sh@21 -- # val= 00:14:43.549 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:14:43.549 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 11:58:50 -- accel/accel.sh@21 -- # val= 00:14:44.925 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:14:44.925 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:14:44.925 ************************************ 00:14:44.925 END TEST accel_decomp_mthread 00:14:44.925 ************************************ 00:14:44.925 11:58:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:44.925 11:58:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:44.925 11:58:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:44.925 00:14:44.925 real 0m3.084s 00:14:44.925 user 0m2.621s 00:14:44.925 sys 0m0.307s 00:14:44.925 11:58:50 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:14:44.925 11:58:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.925 11:58:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:44.925 11:58:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:44.925 11:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:44.925 11:58:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.925 ************************************ 00:14:44.925 START TEST accel_deomp_full_mthread 00:14:44.925 ************************************ 00:14:44.925 11:58:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:44.925 11:58:50 -- accel/accel.sh@16 -- # local accel_opc 00:14:44.925 11:58:50 -- accel/accel.sh@17 -- # local accel_module 00:14:44.925 11:58:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:44.925 11:58:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:44.925 11:58:50 -- accel/accel.sh@12 -- # build_accel_config 00:14:44.925 11:58:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:44.925 11:58:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:44.925 11:58:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:44.925 11:58:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:44.925 11:58:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:44.925 11:58:50 -- accel/accel.sh@41 -- # local IFS=, 00:14:44.925 11:58:50 -- accel/accel.sh@42 -- # jq -r . 00:14:44.925 [2024-11-29 11:58:50.174004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:44.925 [2024-11-29 11:58:50.174803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119661 ] 00:14:44.925 [2024-11-29 11:58:50.315407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.925 [2024-11-29 11:58:50.390363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.321 11:58:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:46.321 00:14:46.321 SPDK Configuration: 00:14:46.321 Core mask: 0x1 00:14:46.321 00:14:46.321 Accel Perf Configuration: 00:14:46.321 Workload Type: decompress 00:14:46.321 Transfer size: 111250 bytes 00:14:46.321 Vector count 1 00:14:46.321 Module: software 00:14:46.321 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:46.321 Queue depth: 32 00:14:46.321 Allocate depth: 32 00:14:46.321 # threads/core: 2 00:14:46.321 Run time: 1 seconds 00:14:46.321 Verify: Yes 00:14:46.321 00:14:46.321 Running for 1 seconds... 
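Compared with the previous invocation, the only new flag here is -o 0, and the configuration dump now reports a 111250-byte transfer size instead of 4096; the apparent effect of -o 0 with -w decompress is to use the full decompressed size of the bib input, an inference from these two dumps rather than documented behaviour. The Total row printed below can be cross-checked against that transfer size (a rough check, assuming total bandwidth is transfers per second times transfer size):

  echo $(( 4672 * 111250 / 1024 / 1024 ))   # prints 495, matching the 495 MiB/s Total below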
00:14:46.321 00:14:46.321 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:46.321 ------------------------------------------------------------------------------------ 00:14:46.321 0,1 2368/s 97 MiB/s 0 0 00:14:46.321 0,0 2304/s 95 MiB/s 0 0 00:14:46.321 ==================================================================================== 00:14:46.321 Total 4672/s 495 MiB/s 0 0' 00:14:46.321 11:58:51 -- accel/accel.sh@20 -- # IFS=: 00:14:46.321 11:58:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:46.321 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:14:46.321 11:58:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:46.321 11:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:14:46.321 11:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:46.321 11:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:46.321 11:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:46.321 11:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:46.321 11:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:46.321 11:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:14:46.321 11:58:51 -- accel/accel.sh@42 -- # jq -r . 00:14:46.321 [2024-11-29 11:58:51.717061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:46.321 [2024-11-29 11:58:51.717638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119690 ] 00:14:46.580 [2024-11-29 11:58:51.880224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.580 [2024-11-29 11:58:51.974930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=0x1 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=decompress 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=software 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@23 -- # accel_module=software 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=32 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=32 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=2 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val=Yes 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:46.580 11:58:52 -- accel/accel.sh@21 -- # val= 00:14:46.580 11:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:14:46.580 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # 
read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@21 -- # val= 00:14:47.954 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:14:47.954 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:14:47.954 11:58:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:47.954 11:58:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:47.954 11:58:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:47.954 00:14:47.954 real 0m3.141s 00:14:47.954 user 0m2.701s 00:14:47.954 sys 0m0.293s 00:14:47.954 11:58:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:47.954 11:58:53 -- common/autotest_common.sh@10 -- # set +x 00:14:47.954 ************************************ 00:14:47.954 END TEST accel_deomp_full_mthread 00:14:47.954 ************************************ 00:14:47.954 11:58:53 -- accel/accel.sh@116 -- # [[ n == y ]] 00:14:47.954 11:58:53 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:47.954 11:58:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:47.954 11:58:53 -- accel/accel.sh@129 -- # build_accel_config 00:14:47.954 11:58:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.954 11:58:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:47.954 11:58:53 -- common/autotest_common.sh@10 -- # set +x 00:14:47.954 11:58:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:47.954 11:58:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:47.954 11:58:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:47.954 11:58:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:47.954 11:58:53 -- accel/accel.sh@41 -- # local IFS=, 00:14:47.954 11:58:53 -- accel/accel.sh@42 -- # jq -r . 00:14:47.954 ************************************ 00:14:47.954 START TEST accel_dif_functional_tests 00:14:47.954 ************************************ 00:14:47.954 11:58:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:47.954 [2024-11-29 11:58:53.404785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
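In the CUnit run that follows, the dif.c *ERROR* lines are expected output: the negative tests feed the verifier corrupted Guard, App Tag and Ref Tag fields and are marked passed precisely because verification rejects them. The binary under test, as invoked by the harness (the -c /dev/fd/62 JSON accel config comes from build_accel_config, and the EAL core mask 0x7 accounts for the three reactors reported below):

  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62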
00:14:47.954 [2024-11-29 11:58:53.405248] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119725 ] 00:14:48.211 [2024-11-29 11:58:53.560938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:48.211 [2024-11-29 11:58:53.625435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.211 [2024-11-29 11:58:53.625544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.211 [2024-11-29 11:58:53.625550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.211 00:14:48.211 00:14:48.211 CUnit - A unit testing framework for C - Version 2.1-3 00:14:48.211 http://cunit.sourceforge.net/ 00:14:48.211 00:14:48.211 00:14:48.211 Suite: accel_dif 00:14:48.211 Test: verify: DIF generated, GUARD check ...passed 00:14:48.211 Test: verify: DIF generated, APPTAG check ...passed 00:14:48.211 Test: verify: DIF generated, REFTAG check ...passed 00:14:48.211 Test: verify: DIF not generated, GUARD check ...[2024-11-29 11:58:53.715605] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:48.211 [2024-11-29 11:58:53.716280] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:48.211 passed 00:14:48.211 Test: verify: DIF not generated, APPTAG check ...[2024-11-29 11:58:53.716826] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:48.211 [2024-11-29 11:58:53.717183] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:48.211 passed 00:14:48.211 Test: verify: DIF not generated, REFTAG check ...[2024-11-29 11:58:53.717681] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:48.211 [2024-11-29 11:58:53.718013] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:48.211 passed 00:14:48.211 Test: verify: APPTAG correct, APPTAG check ...passed 00:14:48.211 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-29 11:58:53.718803] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:14:48.211 passed 00:14:48.211 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:14:48.211 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:14:48.211 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:14:48.211 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-29 11:58:53.720007] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:14:48.211 passed 00:14:48.211 Test: generate copy: DIF generated, GUARD check ...passed 00:14:48.211 Test: generate copy: DIF generated, APTTAG check ...passed 00:14:48.211 Test: generate copy: DIF generated, REFTAG check ...passed 00:14:48.211 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:14:48.211 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:14:48.211 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:14:48.211 Test: generate copy: iovecs-len validate ...[2024-11-29 11:58:53.721900] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:14:48.211 passed 00:14:48.468 Test: generate copy: buffer alignment validate ...passed 00:14:48.468 00:14:48.468 Run Summary: Type Total Ran Passed Failed Inactive 00:14:48.468 suites 1 1 n/a 0 0 00:14:48.468 tests 20 20 20 0 0 00:14:48.468 asserts 204 204 204 0 n/a 00:14:48.468 00:14:48.468 Elapsed time = 0.017 seconds 00:14:48.468 ************************************ 00:14:48.468 END TEST accel_dif_functional_tests 00:14:48.468 ************************************ 00:14:48.468 00:14:48.468 real 0m0.635s 00:14:48.468 user 0m0.793s 00:14:48.468 sys 0m0.224s 00:14:48.468 11:58:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:48.469 11:58:53 -- common/autotest_common.sh@10 -- # set +x 00:14:48.726 ************************************ 00:14:48.726 END TEST accel 00:14:48.726 ************************************ 00:14:48.726 00:14:48.726 real 1m6.661s 00:14:48.726 user 1m10.346s 00:14:48.726 sys 0m7.820s 00:14:48.726 11:58:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:48.726 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:14:48.726 11:58:54 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:48.726 11:58:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:48.726 11:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.726 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:14:48.726 ************************************ 00:14:48.726 START TEST accel_rpc 00:14:48.726 ************************************ 00:14:48.726 11:58:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:48.726 * Looking for test storage... 00:14:48.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:14:48.726 11:58:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:48.726 11:58:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:48.726 11:58:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:48.726 11:58:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:48.726 11:58:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:48.726 11:58:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:48.726 11:58:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:48.726 11:58:54 -- scripts/common.sh@335 -- # IFS=.-: 00:14:48.726 11:58:54 -- scripts/common.sh@335 -- # read -ra ver1 00:14:48.726 11:58:54 -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.726 11:58:54 -- scripts/common.sh@336 -- # read -ra ver2 00:14:48.726 11:58:54 -- scripts/common.sh@337 -- # local 'op=<' 00:14:48.726 11:58:54 -- scripts/common.sh@339 -- # ver1_l=2 00:14:48.726 11:58:54 -- scripts/common.sh@340 -- # ver2_l=1 00:14:48.726 11:58:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:48.726 11:58:54 -- scripts/common.sh@343 -- # case "$op" in 00:14:48.726 11:58:54 -- scripts/common.sh@344 -- # : 1 00:14:48.726 11:58:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:48.726 11:58:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.726 11:58:54 -- scripts/common.sh@364 -- # decimal 1 00:14:48.726 11:58:54 -- scripts/common.sh@352 -- # local d=1 00:14:48.726 11:58:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.726 11:58:54 -- scripts/common.sh@354 -- # echo 1 00:14:48.726 11:58:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:48.726 11:58:54 -- scripts/common.sh@365 -- # decimal 2 00:14:48.726 11:58:54 -- scripts/common.sh@352 -- # local d=2 00:14:48.726 11:58:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.726 11:58:54 -- scripts/common.sh@354 -- # echo 2 00:14:48.984 11:58:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:48.984 11:58:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:48.984 11:58:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:48.984 11:58:54 -- scripts/common.sh@367 -- # return 0 00:14:48.984 11:58:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.984 11:58:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.984 --rc genhtml_branch_coverage=1 00:14:48.984 --rc genhtml_function_coverage=1 00:14:48.984 --rc genhtml_legend=1 00:14:48.984 --rc geninfo_all_blocks=1 00:14:48.984 --rc geninfo_unexecuted_blocks=1 00:14:48.984 00:14:48.984 ' 00:14:48.984 11:58:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.984 --rc genhtml_branch_coverage=1 00:14:48.984 --rc genhtml_function_coverage=1 00:14:48.984 --rc genhtml_legend=1 00:14:48.984 --rc geninfo_all_blocks=1 00:14:48.984 --rc geninfo_unexecuted_blocks=1 00:14:48.984 00:14:48.984 ' 00:14:48.984 11:58:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.984 --rc genhtml_branch_coverage=1 00:14:48.984 --rc genhtml_function_coverage=1 00:14:48.984 --rc genhtml_legend=1 00:14:48.984 --rc geninfo_all_blocks=1 00:14:48.984 --rc geninfo_unexecuted_blocks=1 00:14:48.985 00:14:48.985 ' 00:14:48.985 11:58:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:48.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.985 --rc genhtml_branch_coverage=1 00:14:48.985 --rc genhtml_function_coverage=1 00:14:48.985 --rc genhtml_legend=1 00:14:48.985 --rc geninfo_all_blocks=1 00:14:48.985 --rc geninfo_unexecuted_blocks=1 00:14:48.985 00:14:48.985 ' 00:14:48.985 11:58:54 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:48.985 11:58:54 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=119813 00:14:48.985 11:58:54 -- accel/accel_rpc.sh@15 -- # waitforlisten 119813 00:14:48.985 11:58:54 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:48.985 11:58:54 -- common/autotest_common.sh@829 -- # '[' -z 119813 ']' 00:14:48.985 11:58:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.985 11:58:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.985 11:58:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
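The assignment flow this test walks through below can be repeated by hand against a target started with --wait-for-rpc, using the same RPC names that appear in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected to print: software

The test also issues the same RPC with -m incorrect first; the target only notes that assignment ("Operation copy will be assigned to module incorrect") and it is superseded by the software assignment that follows.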
00:14:48.985 11:58:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.985 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:14:48.985 [2024-11-29 11:58:54.306552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:48.985 [2024-11-29 11:58:54.307122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119813 ] 00:14:48.985 [2024-11-29 11:58:54.460704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.243 [2024-11-29 11:58:54.555819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.243 [2024-11-29 11:58:54.556780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.809 11:58:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.809 11:58:55 -- common/autotest_common.sh@862 -- # return 0 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:14:49.809 11:58:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:49.809 11:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.809 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:49.809 ************************************ 00:14:49.809 START TEST accel_assign_opcode 00:14:49.809 ************************************ 00:14:49.809 11:58:55 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:14:49.809 11:58:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.809 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:49.809 [2024-11-29 11:58:55.257943] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:14:49.809 11:58:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.809 11:58:55 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:14:49.809 11:58:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.810 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:49.810 [2024-11-29 11:58:55.265934] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:14:49.810 11:58:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.810 11:58:55 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:14:49.810 11:58:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.810 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:50.067 11:58:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.067 11:58:55 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:14:50.067 11:58:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.067 11:58:55 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:14:50.067 11:58:55 -- accel/accel_rpc.sh@42 -- # grep software 00:14:50.067 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:50.067 11:58:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.067 software 00:14:50.067 00:14:50.067 
real 0m0.302s 00:14:50.067 user 0m0.051s 00:14:50.067 sys 0m0.008s 00:14:50.067 11:58:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:50.067 ************************************ 00:14:50.067 END TEST accel_assign_opcode 00:14:50.067 ************************************ 00:14:50.067 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 11:58:55 -- accel/accel_rpc.sh@55 -- # killprocess 119813 00:14:50.326 11:58:55 -- common/autotest_common.sh@936 -- # '[' -z 119813 ']' 00:14:50.326 11:58:55 -- common/autotest_common.sh@940 -- # kill -0 119813 00:14:50.326 11:58:55 -- common/autotest_common.sh@941 -- # uname 00:14:50.326 11:58:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.326 11:58:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119813 00:14:50.326 killing process with pid 119813 00:14:50.326 11:58:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.326 11:58:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.326 11:58:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119813' 00:14:50.326 11:58:55 -- common/autotest_common.sh@955 -- # kill 119813 00:14:50.326 11:58:55 -- common/autotest_common.sh@960 -- # wait 119813 00:14:50.892 00:14:50.892 real 0m2.039s 00:14:50.892 user 0m2.031s 00:14:50.892 sys 0m0.516s 00:14:50.892 11:58:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:50.892 ************************************ 00:14:50.892 END TEST accel_rpc 00:14:50.892 ************************************ 00:14:50.892 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.892 11:58:56 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:50.892 11:58:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:50.892 11:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.892 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.892 ************************************ 00:14:50.892 START TEST app_cmdline 00:14:50.892 ************************************ 00:14:50.892 11:58:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:50.892 * Looking for test storage... 
00:14:50.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:50.892 11:58:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:50.892 11:58:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:50.892 11:58:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:50.892 11:58:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:50.892 11:58:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:50.892 11:58:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:50.892 11:58:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:50.892 11:58:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:50.892 11:58:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:50.892 11:58:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.892 11:58:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:50.892 11:58:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:50.892 11:58:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:50.892 11:58:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:50.892 11:58:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:50.892 11:58:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:50.892 11:58:56 -- scripts/common.sh@344 -- # : 1 00:14:50.892 11:58:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:50.892 11:58:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.892 11:58:56 -- scripts/common.sh@364 -- # decimal 1 00:14:50.892 11:58:56 -- scripts/common.sh@352 -- # local d=1 00:14:50.892 11:58:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.892 11:58:56 -- scripts/common.sh@354 -- # echo 1 00:14:50.892 11:58:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:50.892 11:58:56 -- scripts/common.sh@365 -- # decimal 2 00:14:50.892 11:58:56 -- scripts/common.sh@352 -- # local d=2 00:14:50.892 11:58:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.892 11:58:56 -- scripts/common.sh@354 -- # echo 2 00:14:50.892 11:58:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:50.892 11:58:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:50.892 11:58:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:50.892 11:58:56 -- scripts/common.sh@367 -- # return 0 00:14:50.892 11:58:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.892 11:58:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.892 --rc genhtml_branch_coverage=1 00:14:50.892 --rc genhtml_function_coverage=1 00:14:50.892 --rc genhtml_legend=1 00:14:50.892 --rc geninfo_all_blocks=1 00:14:50.892 --rc geninfo_unexecuted_blocks=1 00:14:50.892 00:14:50.892 ' 00:14:50.892 11:58:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.892 --rc genhtml_branch_coverage=1 00:14:50.892 --rc genhtml_function_coverage=1 00:14:50.892 --rc genhtml_legend=1 00:14:50.892 --rc geninfo_all_blocks=1 00:14:50.892 --rc geninfo_unexecuted_blocks=1 00:14:50.892 00:14:50.892 ' 00:14:50.892 11:58:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.892 --rc genhtml_branch_coverage=1 00:14:50.892 --rc genhtml_function_coverage=1 00:14:50.892 --rc genhtml_legend=1 00:14:50.892 --rc geninfo_all_blocks=1 00:14:50.892 --rc geninfo_unexecuted_blocks=1 00:14:50.892 00:14:50.892 ' 00:14:50.892 11:58:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.892 --rc genhtml_branch_coverage=1 00:14:50.892 --rc genhtml_function_coverage=1 00:14:50.892 --rc genhtml_legend=1 00:14:50.892 --rc geninfo_all_blocks=1 00:14:50.892 --rc geninfo_unexecuted_blocks=1 00:14:50.892 00:14:50.892 ' 00:14:50.892 11:58:56 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:14:50.892 11:58:56 -- app/cmdline.sh@17 -- # spdk_tgt_pid=119929 00:14:50.892 11:58:56 -- app/cmdline.sh@18 -- # waitforlisten 119929 00:14:50.892 11:58:56 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:14:50.892 11:58:56 -- common/autotest_common.sh@829 -- # '[' -z 119929 ']' 00:14:50.892 11:58:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.892 11:58:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.892 11:58:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.892 11:58:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.892 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.892 [2024-11-29 11:58:56.366068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:50.893 [2024-11-29 11:58:56.366363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119929 ] 00:14:51.150 [2024-11-29 11:58:56.514726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.150 [2024-11-29 11:58:56.595667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:51.150 [2024-11-29 11:58:56.595912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.093 11:58:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.093 11:58:57 -- common/autotest_common.sh@862 -- # return 0 00:14:52.093 11:58:57 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:14:52.093 { 00:14:52.093 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:14:52.093 "fields": { 00:14:52.093 "major": 24, 00:14:52.093 "minor": 1, 00:14:52.093 "patch": 1, 00:14:52.093 "suffix": "-pre", 00:14:52.093 "commit": "c13c99a5e" 00:14:52.093 } 00:14:52.093 } 00:14:52.093 11:58:57 -- app/cmdline.sh@22 -- # expected_methods=() 00:14:52.093 11:58:57 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:14:52.093 11:58:57 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:14:52.093 11:58:57 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:14:52.093 11:58:57 -- app/cmdline.sh@26 -- # sort 00:14:52.093 11:58:57 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:14:52.093 11:58:57 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:14:52.093 11:58:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.093 11:58:57 -- common/autotest_common.sh@10 -- # set +x 00:14:52.093 11:58:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.368 11:58:57 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:14:52.368 11:58:57 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:14:52.368 11:58:57 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:52.368 11:58:57 -- common/autotest_common.sh@650 -- # local es=0 00:14:52.368 11:58:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:52.368 11:58:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.368 11:58:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.368 11:58:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.368 11:58:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.368 11:58:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.368 11:58:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.368 11:58:57 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.368 11:58:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:52.368 11:58:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:52.368 request: 00:14:52.368 { 00:14:52.368 "method": "env_dpdk_get_mem_stats", 00:14:52.368 "req_id": 1 00:14:52.368 } 00:14:52.368 Got JSON-RPC error response 00:14:52.368 response: 00:14:52.368 { 00:14:52.368 "code": -32601, 00:14:52.368 "message": "Method not found" 00:14:52.368 } 00:14:52.625 11:58:57 -- common/autotest_common.sh@653 -- # es=1 00:14:52.625 11:58:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.625 11:58:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.625 11:58:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.625 11:58:57 -- app/cmdline.sh@1 -- # killprocess 119929 00:14:52.625 11:58:57 -- common/autotest_common.sh@936 -- # '[' -z 119929 ']' 00:14:52.625 11:58:57 -- common/autotest_common.sh@940 -- # kill -0 119929 00:14:52.625 11:58:57 -- common/autotest_common.sh@941 -- # uname 00:14:52.625 11:58:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.625 11:58:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119929 00:14:52.625 11:58:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.625 11:58:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.625 killing process with pid 119929 00:14:52.625 11:58:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119929' 00:14:52.625 11:58:57 -- common/autotest_common.sh@955 -- # kill 119929 00:14:52.625 11:58:57 -- common/autotest_common.sh@960 -- # wait 119929 00:14:52.884 00:14:52.884 real 0m2.221s 00:14:52.884 user 0m2.682s 00:14:52.884 sys 0m0.515s 00:14:52.884 ************************************ 00:14:52.884 END TEST app_cmdline 00:14:52.884 ************************************ 00:14:52.884 11:58:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.884 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:14:53.141 11:58:58 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:53.141 11:58:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.141 11:58:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.141 11:58:58 -- common/autotest_common.sh@10 -- # set +x 
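The failure above is the intended result: this spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those are the only callable methods and anything else is rejected with JSON-RPC error -32601 ("Method not found"). By hand, against a target started the same way:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed; returns the version object shown above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # rejected with "Method not found"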
00:14:53.141 ************************************ 00:14:53.141 START TEST version 00:14:53.141 ************************************ 00:14:53.141 11:58:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:53.141 * Looking for test storage... 00:14:53.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:53.141 11:58:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.141 11:58:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.141 11:58:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.141 11:58:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.141 11:58:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.141 11:58:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.141 11:58:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.141 11:58:58 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.141 11:58:58 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.141 11:58:58 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.142 11:58:58 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.142 11:58:58 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.142 11:58:58 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.142 11:58:58 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.142 11:58:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.142 11:58:58 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.142 11:58:58 -- scripts/common.sh@344 -- # : 1 00:14:53.142 11:58:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.142 11:58:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.142 11:58:58 -- scripts/common.sh@364 -- # decimal 1 00:14:53.142 11:58:58 -- scripts/common.sh@352 -- # local d=1 00:14:53.142 11:58:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.142 11:58:58 -- scripts/common.sh@354 -- # echo 1 00:14:53.142 11:58:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.142 11:58:58 -- scripts/common.sh@365 -- # decimal 2 00:14:53.142 11:58:58 -- scripts/common.sh@352 -- # local d=2 00:14:53.142 11:58:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.142 11:58:58 -- scripts/common.sh@354 -- # echo 2 00:14:53.142 11:58:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.142 11:58:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.142 11:58:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.142 11:58:58 -- scripts/common.sh@367 -- # return 0 00:14:53.142 11:58:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.142 11:58:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.142 --rc genhtml_branch_coverage=1 00:14:53.142 --rc genhtml_function_coverage=1 00:14:53.142 --rc genhtml_legend=1 00:14:53.142 --rc geninfo_all_blocks=1 00:14:53.142 --rc geninfo_unexecuted_blocks=1 00:14:53.142 00:14:53.142 ' 00:14:53.142 11:58:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.142 --rc genhtml_branch_coverage=1 00:14:53.142 --rc genhtml_function_coverage=1 00:14:53.142 --rc genhtml_legend=1 00:14:53.142 --rc geninfo_all_blocks=1 00:14:53.142 --rc geninfo_unexecuted_blocks=1 00:14:53.142 00:14:53.142 ' 00:14:53.142 11:58:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.142 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:53.142 --rc genhtml_branch_coverage=1 00:14:53.142 --rc genhtml_function_coverage=1 00:14:53.142 --rc genhtml_legend=1 00:14:53.142 --rc geninfo_all_blocks=1 00:14:53.142 --rc geninfo_unexecuted_blocks=1 00:14:53.142 00:14:53.142 ' 00:14:53.142 11:58:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.142 --rc genhtml_branch_coverage=1 00:14:53.142 --rc genhtml_function_coverage=1 00:14:53.142 --rc genhtml_legend=1 00:14:53.142 --rc geninfo_all_blocks=1 00:14:53.142 --rc geninfo_unexecuted_blocks=1 00:14:53.142 00:14:53.142 ' 00:14:53.142 11:58:58 -- app/version.sh@17 -- # get_header_version major 00:14:53.142 11:58:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:53.142 11:58:58 -- app/version.sh@14 -- # cut -f2 00:14:53.142 11:58:58 -- app/version.sh@14 -- # tr -d '"' 00:14:53.142 11:58:58 -- app/version.sh@17 -- # major=24 00:14:53.142 11:58:58 -- app/version.sh@18 -- # get_header_version minor 00:14:53.142 11:58:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:53.142 11:58:58 -- app/version.sh@14 -- # cut -f2 00:14:53.142 11:58:58 -- app/version.sh@14 -- # tr -d '"' 00:14:53.142 11:58:58 -- app/version.sh@18 -- # minor=1 00:14:53.142 11:58:58 -- app/version.sh@19 -- # get_header_version patch 00:14:53.142 11:58:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:53.142 11:58:58 -- app/version.sh@14 -- # cut -f2 00:14:53.142 11:58:58 -- app/version.sh@14 -- # tr -d '"' 00:14:53.142 11:58:58 -- app/version.sh@19 -- # patch=1 00:14:53.142 11:58:58 -- app/version.sh@20 -- # get_header_version suffix 00:14:53.142 11:58:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:53.142 11:58:58 -- app/version.sh@14 -- # cut -f2 00:14:53.142 11:58:58 -- app/version.sh@14 -- # tr -d '"' 00:14:53.142 11:58:58 -- app/version.sh@20 -- # suffix=-pre 00:14:53.142 11:58:58 -- app/version.sh@22 -- # version=24.1 00:14:53.142 11:58:58 -- app/version.sh@25 -- # (( patch != 0 )) 00:14:53.142 11:58:58 -- app/version.sh@25 -- # version=24.1.1 00:14:53.142 11:58:58 -- app/version.sh@28 -- # version=24.1.1rc0 00:14:53.142 11:58:58 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:53.142 11:58:58 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:14:53.400 11:58:58 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:14:53.400 11:58:58 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:14:53.400 00:14:53.400 real 0m0.243s 00:14:53.400 user 0m0.204s 00:14:53.400 sys 0m0.079s 00:14:53.400 11:58:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.400 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:14:53.400 ************************************ 00:14:53.400 END TEST version 00:14:53.400 ************************************ 00:14:53.400 11:58:58 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:14:53.400 11:58:58 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
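The version test above is pure string plumbing: each component is pulled out of include/spdk/version.h with grep/cut/tr, stitched into 24.1.1rc0, and compared against what the in-tree Python package reports. A condensed sketch of those steps, assuming the repo layout used in this run (the suffix-to-rc0 mapping is paraphrased from the trace, not copied from version.sh):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    ver_h=$SPDK_DIR/include/spdk/version.h

    get_header_version() {
        # e.g. '#define SPDK_VERSION_MAJOR<TAB>24' -> 24; cut's default tab
        # delimiter matches how the header is laid out in this tree.
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$ver_h" | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre

    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch      # 24.1.1
    [[ $suffix == -pre ]] && version=${version}rc0   # 24.1.1rc0

    py_version=$(PYTHONPATH=$SPDK_DIR/python python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "versions agree: $version"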
00:14:53.400 11:58:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.400 11:58:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.400 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:14:53.400 ************************************ 00:14:53.400 START TEST blockdev_general 00:14:53.400 ************************************ 00:14:53.400 11:58:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:14:53.400 * Looking for test storage... 00:14:53.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:53.400 11:58:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.400 11:58:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.400 11:58:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.400 11:58:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.400 11:58:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.400 11:58:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.400 11:58:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.400 11:58:58 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.400 11:58:58 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.400 11:58:58 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.400 11:58:58 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.400 11:58:58 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.400 11:58:58 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.401 11:58:58 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.401 11:58:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.401 11:58:58 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.401 11:58:58 -- scripts/common.sh@344 -- # : 1 00:14:53.401 11:58:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.401 11:58:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.401 11:58:58 -- scripts/common.sh@364 -- # decimal 1 00:14:53.401 11:58:58 -- scripts/common.sh@352 -- # local d=1 00:14:53.401 11:58:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.401 11:58:58 -- scripts/common.sh@354 -- # echo 1 00:14:53.401 11:58:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.401 11:58:58 -- scripts/common.sh@365 -- # decimal 2 00:14:53.401 11:58:58 -- scripts/common.sh@352 -- # local d=2 00:14:53.401 11:58:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.401 11:58:58 -- scripts/common.sh@354 -- # echo 2 00:14:53.401 11:58:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.401 11:58:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.401 11:58:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.401 11:58:58 -- scripts/common.sh@367 -- # return 0 00:14:53.401 11:58:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.401 11:58:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.401 --rc genhtml_branch_coverage=1 00:14:53.401 --rc genhtml_function_coverage=1 00:14:53.401 --rc genhtml_legend=1 00:14:53.401 --rc geninfo_all_blocks=1 00:14:53.401 --rc geninfo_unexecuted_blocks=1 00:14:53.401 00:14:53.401 ' 00:14:53.401 11:58:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.401 --rc genhtml_branch_coverage=1 00:14:53.401 --rc genhtml_function_coverage=1 00:14:53.401 --rc genhtml_legend=1 00:14:53.401 --rc geninfo_all_blocks=1 00:14:53.401 --rc geninfo_unexecuted_blocks=1 00:14:53.401 00:14:53.401 ' 00:14:53.401 11:58:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.401 --rc genhtml_branch_coverage=1 00:14:53.401 --rc genhtml_function_coverage=1 00:14:53.401 --rc genhtml_legend=1 00:14:53.401 --rc geninfo_all_blocks=1 00:14:53.401 --rc geninfo_unexecuted_blocks=1 00:14:53.401 00:14:53.401 ' 00:14:53.401 11:58:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.401 --rc genhtml_branch_coverage=1 00:14:53.401 --rc genhtml_function_coverage=1 00:14:53.401 --rc genhtml_legend=1 00:14:53.401 --rc geninfo_all_blocks=1 00:14:53.401 --rc geninfo_unexecuted_blocks=1 00:14:53.401 00:14:53.401 ' 00:14:53.401 11:58:58 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:53.401 11:58:58 -- bdev/nbd_common.sh@6 -- # set -e 00:14:53.401 11:58:58 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:53.401 11:58:58 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:53.401 11:58:58 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:53.401 11:58:58 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:53.401 11:58:58 -- bdev/blockdev.sh@18 -- # : 00:14:53.401 11:58:58 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:14:53.401 11:58:58 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:14:53.401 11:58:58 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:14:53.401 11:58:58 -- bdev/blockdev.sh@672 -- # uname -s 00:14:53.401 11:58:58 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:14:53.401 11:58:58 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:14:53.401 11:58:58 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:14:53.401 11:58:58 -- bdev/blockdev.sh@681 -- # crypto_device= 00:14:53.401 11:58:58 -- bdev/blockdev.sh@682 -- # dek= 00:14:53.401 11:58:58 -- bdev/blockdev.sh@683 -- # env_ctx= 00:14:53.401 11:58:58 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:14:53.401 11:58:58 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:14:53.401 11:58:58 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:14:53.401 11:58:58 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:14:53.401 11:58:58 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:14:53.401 11:58:58 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=120113 00:14:53.401 11:58:58 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:53.401 11:58:58 -- bdev/blockdev.sh@47 -- # waitforlisten 120113 00:14:53.401 11:58:58 -- common/autotest_common.sh@829 -- # '[' -z 120113 ']' 00:14:53.401 11:58:58 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:14:53.401 11:58:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.401 11:58:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.401 11:58:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.401 11:58:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.401 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:14:53.660 [2024-11-29 11:58:58.958760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:53.660 [2024-11-29 11:58:58.959023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120113 ] 00:14:53.660 [2024-11-29 11:58:59.107607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.919 [2024-11-29 11:58:59.192323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.919 [2024-11-29 11:58:59.192666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.487 11:58:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.487 11:58:59 -- common/autotest_common.sh@862 -- # return 0 00:14:54.487 11:58:59 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:14:54.487 11:58:59 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:14:54.487 11:58:59 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:14:54.487 11:58:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.487 11:58:59 -- common/autotest_common.sh@10 -- # set +x 00:14:54.746 [2024-11-29 11:59:00.232259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:54.746 [2024-11-29 11:59:00.232403] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:54.746 00:14:54.746 [2024-11-29 11:59:00.240160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:54.746 [2024-11-29 11:59:00.240286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:54.746 00:14:55.005 Malloc0 00:14:55.005 Malloc1 00:14:55.005 Malloc2 00:14:55.005 Malloc3 00:14:55.005 Malloc4 00:14:55.005 
Malloc5 00:14:55.005 Malloc6 00:14:55.005 Malloc7 00:14:55.005 Malloc8 00:14:55.005 Malloc9 00:14:55.005 [2024-11-29 11:59:00.447288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:55.005 [2024-11-29 11:59:00.447415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.005 [2024-11-29 11:59:00.447463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:55.005 [2024-11-29 11:59:00.447507] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.005 [2024-11-29 11:59:00.450499] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.005 [2024-11-29 11:59:00.450605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:55.005 TestPT 00:14:55.005 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.005 11:59:00 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:14:55.291 5000+0 records in 00:14:55.291 5000+0 records out 00:14:55.291 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0282101 s, 363 MB/s 00:14:55.291 11:59:00 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:14:55.291 11:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.291 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 AIO0 00:14:55.291 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.291 11:59:00 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:14:55.291 11:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.291 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.291 11:59:00 -- bdev/blockdev.sh@738 -- # cat 00:14:55.291 11:59:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:14:55.291 11:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.291 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.291 11:59:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:14:55.291 11:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.291 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.291 11:59:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:55.291 11:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.291 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.291 11:59:00 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:14:55.291 11:59:00 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:14:55.291 11:59:00 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:14:55.291 11:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.291 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 11:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.291 11:59:00 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:14:55.291 11:59:00 -- bdev/blockdev.sh@747 -- # jq -r .name 00:14:55.293 11:59:00 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "84513506-4942-474c-90e2-e8cb4c7a0c0b"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "84513506-4942-474c-90e2-e8cb4c7a0c0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d573de8c-ab6c-5ccf-866b-fb1722633a6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d573de8c-ab6c-5ccf-866b-fb1722633a6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ca81c438-abe5-5726-8bbb-a219d3ba5991"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ca81c438-abe5-5726-8bbb-a219d3ba5991",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a954964b-bddb-5225-9a66-67afc2d6ad0d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a954964b-bddb-5225-9a66-67afc2d6ad0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "43352302-8556-5220-b3a3-524eea111706"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "43352302-8556-5220-b3a3-524eea111706",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "63568ebe-73c3-5dc0-987a-eee30ca32dd1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "63568ebe-73c3-5dc0-987a-eee30ca32dd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "56be882d-5264-5ff6-9f94-b57d6413ba7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56be882d-5264-5ff6-9f94-b57d6413ba7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cfaadef2-4e67-5c48-9618-d7167188174a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfaadef2-4e67-5c48-9618-d7167188174a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "9d2dd6b6-402f-5c20-913f-29300130f9f8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d2dd6b6-402f-5c20-913f-29300130f9f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "73384634-feaf-5416-82ce-dcc6389f1b52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "73384634-feaf-5416-82ce-dcc6389f1b52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "10ac1d90-abc3-55b0-ac5a-3d0da1e003d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10ac1d90-abc3-55b0-ac5a-3d0da1e003d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a24ebb49-38d3-53e0-8511-84a240bea5a4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a24ebb49-38d3-53e0-8511-84a240bea5a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "989732c5-784e-44b4-8bad-2d2228f4ed25",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "5829174b-fdb8-4339-8657-affcacca4d06",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "8afa122a-ffac-4d9f-a813-dfea4dded088"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8afa122a-ffac-4d9f-a813-dfea4dded088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8afa122a-ffac-4d9f-a813-dfea4dded088",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "dfec51ab-7035-447e-8cdc-a2f0a3250877",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f3b40517-97fc-420d-b3ad-206b8c81b058",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "da6e5b36-1ff7-45bf-8d5a-afcb42524f12"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "da6e5b36-1ff7-45bf-8d5a-afcb42524f12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "da6e5b36-1ff7-45bf-8d5a-afcb42524f12",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "79a2589d-fdec-4786-bf79-a13206608e80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "32b07c2c-090c-4ed8-8b63-f6e0ffc46861",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "33e61c46-46d4-49b5-a4d4-4367a84abc64"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "33e61c46-46d4-49b5-a4d4-4367a84abc64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:14:55.293 11:59:00 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:14:55.293 11:59:00 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:14:55.293 11:59:00 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:14:55.293 11:59:00 -- bdev/blockdev.sh@752 -- # killprocess 120113 00:14:55.293 11:59:00 -- common/autotest_common.sh@936 -- # '[' -z 120113 ']' 00:14:55.293 11:59:00 -- common/autotest_common.sh@940 -- # kill -0 120113 00:14:55.293 11:59:00 -- common/autotest_common.sh@941 -- # uname 00:14:55.293 11:59:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.293 11:59:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120113 00:14:55.293 11:59:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:55.293 killing process with pid 120113 00:14:55.293 11:59:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:55.293 11:59:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120113' 00:14:55.293 11:59:00 -- common/autotest_common.sh@955 -- # kill 120113 00:14:55.293 11:59:00 -- common/autotest_common.sh@960 -- # wait 120113 00:14:56.226 11:59:01 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:56.226 11:59:01 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:14:56.226 11:59:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:56.226 11:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.226 11:59:01 -- common/autotest_common.sh@10 -- # set +x 00:14:56.226 ************************************ 00:14:56.226 START TEST bdev_hello_world 00:14:56.226 ************************************ 00:14:56.226 11:59:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:14:56.226 [2024-11-29 11:59:01.485338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:56.226 [2024-11-29 11:59:01.485586] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120166 ] 00:14:56.226 [2024-11-29 11:59:01.630739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.226 [2024-11-29 11:59:01.712908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.484 [2024-11-29 11:59:01.871028] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:56.484 [2024-11-29 11:59:01.871162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:56.484 [2024-11-29 11:59:01.878948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:56.484 [2024-11-29 11:59:01.879062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:56.484 [2024-11-29 11:59:01.887023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:56.484 [2024-11-29 11:59:01.887131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:56.485 [2024-11-29 11:59:01.887183] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:56.485 [2024-11-29 11:59:01.990345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:56.485 [2024-11-29 11:59:01.990485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.485 [2024-11-29 11:59:01.990554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.485 [2024-11-29 11:59:01.990600] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.485 [2024-11-29 11:59:01.993546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.485 [2024-11-29 11:59:01.993661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:56.742 [2024-11-29 11:59:02.180284] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:56.742 [2024-11-29 11:59:02.180407] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:14:56.742 [2024-11-29 11:59:02.180601] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:56.742 [2024-11-29 11:59:02.180718] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:56.742 [2024-11-29 11:59:02.180880] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:56.742 [2024-11-29 11:59:02.180948] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:56.742 [2024-11-29 11:59:02.181057] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:14:56.742 00:14:56.742 [2024-11-29 11:59:02.181139] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:57.307 00:14:57.307 real 0m1.176s 00:14:57.307 user 0m0.643s 00:14:57.307 sys 0m0.377s 00:14:57.307 11:59:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:57.307 11:59:02 -- common/autotest_common.sh@10 -- # set +x 00:14:57.307 ************************************ 00:14:57.307 END TEST bdev_hello_world 00:14:57.307 ************************************ 00:14:57.307 11:59:02 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:14:57.307 11:59:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:57.307 11:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:57.307 11:59:02 -- common/autotest_common.sh@10 -- # set +x 00:14:57.307 ************************************ 00:14:57.307 START TEST bdev_bounds 00:14:57.307 ************************************ 00:14:57.307 11:59:02 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:14:57.307 11:59:02 -- bdev/blockdev.sh@288 -- # bdevio_pid=120211 00:14:57.307 11:59:02 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:57.307 Process bdevio pid: 120211 00:14:57.307 11:59:02 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:57.307 11:59:02 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 120211' 00:14:57.307 11:59:02 -- bdev/blockdev.sh@291 -- # waitforlisten 120211 00:14:57.307 11:59:02 -- common/autotest_common.sh@829 -- # '[' -z 120211 ']' 00:14:57.307 11:59:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.307 11:59:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.307 11:59:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.307 11:59:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.307 11:59:02 -- common/autotest_common.sh@10 -- # set +x 00:14:57.307 [2024-11-29 11:59:02.709636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:57.307 [2024-11-29 11:59:02.709943] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120211 ] 00:14:57.571 [2024-11-29 11:59:02.884223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.571 [2024-11-29 11:59:02.971567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.571 [2024-11-29 11:59:02.971656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.571 [2024-11-29 11:59:02.971663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.848 [2024-11-29 11:59:03.119511] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:57.848 [2024-11-29 11:59:03.119653] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:57.848 [2024-11-29 11:59:03.127408] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:57.848 [2024-11-29 11:59:03.127499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:57.848 [2024-11-29 11:59:03.135539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:57.848 [2024-11-29 11:59:03.135654] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:57.848 [2024-11-29 11:59:03.135722] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:57.848 [2024-11-29 11:59:03.235384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:57.848 [2024-11-29 11:59:03.235526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.848 [2024-11-29 11:59:03.235810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:57.848 [2024-11-29 11:59:03.235854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.848 [2024-11-29 11:59:03.239114] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.848 [2024-11-29 11:59:03.239171] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:58.415 11:59:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.415 11:59:03 -- common/autotest_common.sh@862 -- # return 0 00:14:58.415 11:59:03 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:58.415 I/O targets: 00:14:58.415 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:14:58.415 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:14:58.415 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:14:58.415 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:14:58.415 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:14:58.416 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:14:58.416 raid0: 131072 blocks of 512 bytes (64 MiB) 00:14:58.416 concat0: 131072 blocks of 512 bytes (64 MiB) 00:14:58.416 raid1: 65536 blocks of 512 bytes (32 MiB) 00:14:58.416 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
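The bdev_bounds run is driven in two halves, both visible in the trace above: bdevio is started with -w (so it loads the generated bdev.json, prints the I/O target list, and then waits) and -s 0 (PRE_RESERVED_MEM from blockdev.sh), and a separate helper then kicks off the CUnit suites over RPC. A sketch of the same sequence with the paths from this workspace; the trailing '' is the empty extra-arguments slot the harness passes through:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Start bdevio and let it wait for the test trigger instead of running immediately.
    $SPDK_DIR/test/bdev/bdevio/bdevio -w -s 0 \
        --json $SPDK_DIR/test/bdev/bdev.json '' &
    bdevio_pid=$!

    # (the harness waits for the RPC socket to come up at this point)

    # Ask the waiting bdevio to run every suite -- this is what produces the
    # per-bdev CUnit output that follows.
    $SPDK_DIR/test/bdev/bdevio/tests.py perform_tests

    wait $bdevio_pid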
00:14:58.416 00:14:58.416 00:14:58.416 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.416 http://cunit.sourceforge.net/ 00:14:58.416 00:14:58.416 00:14:58.416 Suite: bdevio tests on: AIO0 00:14:58.416 Test: blockdev write read block ...passed 00:14:58.416 Test: blockdev write zeroes read block ...passed 00:14:58.416 Test: blockdev write zeroes read no split ...passed 00:14:58.416 Test: blockdev write zeroes read split ...passed 00:14:58.416 Test: blockdev write zeroes read split partial ...passed 00:14:58.416 Test: blockdev reset ...passed 00:14:58.416 Test: blockdev write read 8 blocks ...passed 00:14:58.416 Test: blockdev write read size > 128k ...passed 00:14:58.416 Test: blockdev write read invalid size ...passed 00:14:58.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.416 Test: blockdev write read max offset ...passed 00:14:58.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.416 Test: blockdev writev readv 8 blocks ...passed 00:14:58.416 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.416 Test: blockdev writev readv block ...passed 00:14:58.416 Test: blockdev writev readv size > 128k ...passed 00:14:58.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.416 Test: blockdev comparev and writev ...passed 00:14:58.416 Test: blockdev nvme passthru rw ...passed 00:14:58.416 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.416 Test: blockdev nvme admin passthru ...passed 00:14:58.416 Test: blockdev copy ...passed 00:14:58.416 Suite: bdevio tests on: raid1 00:14:58.416 Test: blockdev write read block ...passed 00:14:58.416 Test: blockdev write zeroes read block ...passed 00:14:58.416 Test: blockdev write zeroes read no split ...passed 00:14:58.416 Test: blockdev write zeroes read split ...passed 00:14:58.416 Test: blockdev write zeroes read split partial ...passed 00:14:58.416 Test: blockdev reset ...passed 00:14:58.416 Test: blockdev write read 8 blocks ...passed 00:14:58.416 Test: blockdev write read size > 128k ...passed 00:14:58.416 Test: blockdev write read invalid size ...passed 00:14:58.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.416 Test: blockdev write read max offset ...passed 00:14:58.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.416 Test: blockdev writev readv 8 blocks ...passed 00:14:58.416 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.416 Test: blockdev writev readv block ...passed 00:14:58.416 Test: blockdev writev readv size > 128k ...passed 00:14:58.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.416 Test: blockdev comparev and writev ...passed 00:14:58.416 Test: blockdev nvme passthru rw ...passed 00:14:58.416 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.416 Test: blockdev nvme admin passthru ...passed 00:14:58.416 Test: blockdev copy ...passed 00:14:58.416 Suite: bdevio tests on: concat0 00:14:58.416 Test: blockdev write read block ...passed 00:14:58.416 Test: blockdev write zeroes read block ...passed 00:14:58.416 Test: blockdev write zeroes read no split ...passed 00:14:58.416 Test: blockdev write zeroes read split ...passed 00:14:58.416 Test: blockdev write zeroes read split partial ...passed 00:14:58.416 Test: blockdev reset 
...passed 00:14:58.416 Test: blockdev write read 8 blocks ...passed 00:14:58.416 Test: blockdev write read size > 128k ...passed 00:14:58.416 Test: blockdev write read invalid size ...passed 00:14:58.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.416 Test: blockdev write read max offset ...passed 00:14:58.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.416 Test: blockdev writev readv 8 blocks ...passed 00:14:58.416 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.416 Test: blockdev writev readv block ...passed 00:14:58.416 Test: blockdev writev readv size > 128k ...passed 00:14:58.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.416 Test: blockdev comparev and writev ...passed 00:14:58.416 Test: blockdev nvme passthru rw ...passed 00:14:58.416 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.416 Test: blockdev nvme admin passthru ...passed 00:14:58.416 Test: blockdev copy ...passed 00:14:58.416 Suite: bdevio tests on: raid0 00:14:58.416 Test: blockdev write read block ...passed 00:14:58.416 Test: blockdev write zeroes read block ...passed 00:14:58.416 Test: blockdev write zeroes read no split ...passed 00:14:58.416 Test: blockdev write zeroes read split ...passed 00:14:58.416 Test: blockdev write zeroes read split partial ...passed 00:14:58.416 Test: blockdev reset ...passed 00:14:58.416 Test: blockdev write read 8 blocks ...passed 00:14:58.416 Test: blockdev write read size > 128k ...passed 00:14:58.416 Test: blockdev write read invalid size ...passed 00:14:58.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.416 Test: blockdev write read max offset ...passed 00:14:58.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.416 Test: blockdev writev readv 8 blocks ...passed 00:14:58.416 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.416 Test: blockdev writev readv block ...passed 00:14:58.416 Test: blockdev writev readv size > 128k ...passed 00:14:58.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.416 Test: blockdev comparev and writev ...passed 00:14:58.416 Test: blockdev nvme passthru rw ...passed 00:14:58.416 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.416 Test: blockdev nvme admin passthru ...passed 00:14:58.416 Test: blockdev copy ...passed 00:14:58.416 Suite: bdevio tests on: TestPT 00:14:58.416 Test: blockdev write read block ...passed 00:14:58.416 Test: blockdev write zeroes read block ...passed 00:14:58.416 Test: blockdev write zeroes read no split ...passed 00:14:58.416 Test: blockdev write zeroes read split ...passed 00:14:58.676 Test: blockdev write zeroes read split partial ...passed 00:14:58.676 Test: blockdev reset ...passed 00:14:58.676 Test: blockdev write read 8 blocks ...passed 00:14:58.676 Test: blockdev write read size > 128k ...passed 00:14:58.676 Test: blockdev write read invalid size ...passed 00:14:58.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.676 Test: blockdev write read max offset ...passed 00:14:58.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.676 Test: blockdev writev readv 8 blocks 
...passed 00:14:58.676 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.676 Test: blockdev writev readv block ...passed 00:14:58.676 Test: blockdev writev readv size > 128k ...passed 00:14:58.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.676 Test: blockdev comparev and writev ...passed 00:14:58.676 Test: blockdev nvme passthru rw ...passed 00:14:58.676 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.676 Test: blockdev nvme admin passthru ...passed 00:14:58.676 Test: blockdev copy ...passed 00:14:58.676 Suite: bdevio tests on: Malloc2p7 00:14:58.676 Test: blockdev write read block ...passed 00:14:58.676 Test: blockdev write zeroes read block ...passed 00:14:58.676 Test: blockdev write zeroes read no split ...passed 00:14:58.676 Test: blockdev write zeroes read split ...passed 00:14:58.676 Test: blockdev write zeroes read split partial ...passed 00:14:58.676 Test: blockdev reset ...passed 00:14:58.676 Test: blockdev write read 8 blocks ...passed 00:14:58.676 Test: blockdev write read size > 128k ...passed 00:14:58.676 Test: blockdev write read invalid size ...passed 00:14:58.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.676 Test: blockdev write read max offset ...passed 00:14:58.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.676 Test: blockdev writev readv 8 blocks ...passed 00:14:58.676 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.676 Test: blockdev writev readv block ...passed 00:14:58.676 Test: blockdev writev readv size > 128k ...passed 00:14:58.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.676 Test: blockdev comparev and writev ...passed 00:14:58.676 Test: blockdev nvme passthru rw ...passed 00:14:58.676 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.676 Test: blockdev nvme admin passthru ...passed 00:14:58.676 Test: blockdev copy ...passed 00:14:58.676 Suite: bdevio tests on: Malloc2p6 00:14:58.676 Test: blockdev write read block ...passed 00:14:58.676 Test: blockdev write zeroes read block ...passed 00:14:58.676 Test: blockdev write zeroes read no split ...passed 00:14:58.676 Test: blockdev write zeroes read split ...passed 00:14:58.676 Test: blockdev write zeroes read split partial ...passed 00:14:58.676 Test: blockdev reset ...passed 00:14:58.676 Test: blockdev write read 8 blocks ...passed 00:14:58.676 Test: blockdev write read size > 128k ...passed 00:14:58.676 Test: blockdev write read invalid size ...passed 00:14:58.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.676 Test: blockdev write read max offset ...passed 00:14:58.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.676 Test: blockdev writev readv 8 blocks ...passed 00:14:58.676 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.676 Test: blockdev writev readv block ...passed 00:14:58.676 Test: blockdev writev readv size > 128k ...passed 00:14:58.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.676 Test: blockdev comparev and writev ...passed 00:14:58.676 Test: blockdev nvme passthru rw ...passed 00:14:58.676 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.676 Test: blockdev nvme admin passthru ...passed 00:14:58.676 Test: blockdev copy ...passed 
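Each "Suite: bdevio tests on: ..." above corresponds to one entry in the bdev list that blockdev.sh collected earlier with bdev_get_bdevs, keeping only unclaimed bdevs and pulling out their names with jq; the split, passthru, and raid base bdevs are claimed, which is why Malloc1 through Malloc9 only show up inside other bdevs' descriptions while Malloc0, the splits, TestPT, the raids, and AIO0 get their own suites. A sketch of that selection against a running target, assuming the same rpc.py entry point:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Dump every bdev, drop the ones already claimed by a vbdev, keep the names.
    # This reproduces the mapfile/jq pipeline from the setup phase of this test.
    $SPDK_DIR/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'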
00:14:58.676 Suite: bdevio tests on: Malloc2p5 00:14:58.676 Test: blockdev write read block ...passed 00:14:58.676 Test: blockdev write zeroes read block ...passed 00:14:58.676 Test: blockdev write zeroes read no split ...passed 00:14:58.676 Test: blockdev write zeroes read split ...passed 00:14:58.676 Test: blockdev write zeroes read split partial ...passed 00:14:58.676 Test: blockdev reset ...passed 00:14:58.676 Test: blockdev write read 8 blocks ...passed 00:14:58.676 Test: blockdev write read size > 128k ...passed 00:14:58.676 Test: blockdev write read invalid size ...passed 00:14:58.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.676 Test: blockdev write read max offset ...passed 00:14:58.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.676 Test: blockdev writev readv 8 blocks ...passed 00:14:58.676 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.676 Test: blockdev writev readv block ...passed 00:14:58.676 Test: blockdev writev readv size > 128k ...passed 00:14:58.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.676 Test: blockdev comparev and writev ...passed 00:14:58.676 Test: blockdev nvme passthru rw ...passed 00:14:58.676 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.676 Test: blockdev nvme admin passthru ...passed 00:14:58.676 Test: blockdev copy ...passed 00:14:58.676 Suite: bdevio tests on: Malloc2p4 00:14:58.676 Test: blockdev write read block ...passed 00:14:58.676 Test: blockdev write zeroes read block ...passed 00:14:58.676 Test: blockdev write zeroes read no split ...passed 00:14:58.676 Test: blockdev write zeroes read split ...passed 00:14:58.676 Test: blockdev write zeroes read split partial ...passed 00:14:58.676 Test: blockdev reset ...passed 00:14:58.676 Test: blockdev write read 8 blocks ...passed 00:14:58.676 Test: blockdev write read size > 128k ...passed 00:14:58.676 Test: blockdev write read invalid size ...passed 00:14:58.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.676 Test: blockdev write read max offset ...passed 00:14:58.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.676 Test: blockdev writev readv 8 blocks ...passed 00:14:58.676 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.676 Test: blockdev writev readv block ...passed 00:14:58.676 Test: blockdev writev readv size > 128k ...passed 00:14:58.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.676 Test: blockdev comparev and writev ...passed 00:14:58.676 Test: blockdev nvme passthru rw ...passed 00:14:58.676 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.676 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc2p3 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: 
blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc2p2 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc2p1 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 
00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc2p0 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc1p1 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc1p0 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev 
write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 Suite: bdevio tests on: Malloc0 00:14:58.677 Test: blockdev write read block ...passed 00:14:58.677 Test: blockdev write zeroes read block ...passed 00:14:58.677 Test: blockdev write zeroes read no split ...passed 00:14:58.677 Test: blockdev write zeroes read split ...passed 00:14:58.677 Test: blockdev write zeroes read split partial ...passed 00:14:58.677 Test: blockdev reset ...passed 00:14:58.677 Test: blockdev write read 8 blocks ...passed 00:14:58.677 Test: blockdev write read size > 128k ...passed 00:14:58.677 Test: blockdev write read invalid size ...passed 00:14:58.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.677 Test: blockdev write read max offset ...passed 00:14:58.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.677 Test: blockdev writev readv 8 blocks ...passed 00:14:58.677 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.677 Test: blockdev writev readv block ...passed 00:14:58.677 Test: blockdev writev readv size > 128k ...passed 00:14:58.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.677 Test: blockdev comparev and writev ...passed 00:14:58.677 Test: blockdev nvme passthru rw ...passed 00:14:58.677 Test: blockdev nvme passthru vendor specific ...passed 00:14:58.677 Test: blockdev nvme admin passthru ...passed 00:14:58.677 Test: blockdev copy ...passed 00:14:58.677 00:14:58.677 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.677 suites 16 16 n/a 0 0 00:14:58.677 tests 368 368 368 0 0 00:14:58.677 asserts 2224 2224 2224 0 n/a 00:14:58.677 00:14:58.677 Elapsed time = 0.667 seconds 00:14:58.677 0 00:14:58.677 11:59:04 -- bdev/blockdev.sh@293 -- # killprocess 120211 00:14:58.677 11:59:04 -- common/autotest_common.sh@936 -- # '[' -z 120211 ']' 00:14:58.678 11:59:04 -- common/autotest_common.sh@940 -- # kill -0 120211 00:14:58.678 11:59:04 -- common/autotest_common.sh@941 -- # uname 00:14:58.678 11:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.678 11:59:04 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120211 00:14:58.678 11:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:58.678 killing process with pid 120211 00:14:58.678 11:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:58.678 11:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120211' 00:14:58.678 11:59:04 -- common/autotest_common.sh@955 -- # kill 120211 00:14:58.678 11:59:04 -- common/autotest_common.sh@960 -- # wait 120211 00:14:59.244 11:59:04 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:14:59.244 00:14:59.244 real 0m1.877s 00:14:59.244 user 0m4.522s 00:14:59.244 sys 0m0.440s 00:14:59.244 ************************************ 00:14:59.244 END TEST bdev_bounds 00:14:59.244 ************************************ 00:14:59.244 11:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:59.244 11:59:04 -- common/autotest_common.sh@10 -- # set +x 00:14:59.244 11:59:04 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:14:59.244 11:59:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:59.244 11:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.244 11:59:04 -- common/autotest_common.sh@10 -- # set +x 00:14:59.244 ************************************ 00:14:59.244 START TEST bdev_nbd 00:14:59.244 ************************************ 00:14:59.245 11:59:04 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:14:59.245 11:59:04 -- bdev/blockdev.sh@298 -- # uname -s 00:14:59.245 11:59:04 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:14:59.245 11:59:04 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.245 11:59:04 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:59.245 11:59:04 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:59.245 11:59:04 -- bdev/blockdev.sh@302 -- # local bdev_all 00:14:59.245 11:59:04 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:14:59.245 11:59:04 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:14:59.245 11:59:04 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:59.245 11:59:04 -- bdev/blockdev.sh@309 -- # local nbd_all 00:14:59.245 11:59:04 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:14:59.245 11:59:04 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:59.245 11:59:04 -- bdev/blockdev.sh@312 -- # local nbd_list 00:14:59.245 11:59:04 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:59.245 11:59:04 -- bdev/blockdev.sh@313 -- # local bdev_list 00:14:59.245 11:59:04 -- bdev/blockdev.sh@316 -- # nbd_pid=120269 00:14:59.245 11:59:04 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:59.245 11:59:04 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:59.245 11:59:04 -- bdev/blockdev.sh@318 -- # waitforlisten 120269 /var/tmp/spdk-nbd.sock 00:14:59.245 11:59:04 -- common/autotest_common.sh@829 -- # '[' -z 120269 ']' 00:14:59.245 11:59:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:59.245 11:59:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:59.245 11:59:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:59.245 11:59:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.245 11:59:04 -- common/autotest_common.sh@10 -- # set +x 00:14:59.245 [2024-11-29 11:59:04.641538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:59.245 [2024-11-29 11:59:04.641723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.503 [2024-11-29 11:59:04.778923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.503 [2024-11-29 11:59:04.848923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.503 [2024-11-29 11:59:05.001384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:59.503 [2024-11-29 11:59:05.001534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:59.503 [2024-11-29 11:59:05.009325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:59.503 [2024-11-29 11:59:05.009410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:59.767 [2024-11-29 11:59:05.017371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:59.767 [2024-11-29 11:59:05.017480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:59.767 [2024-11-29 11:59:05.017537] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:59.767 [2024-11-29 11:59:05.115910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:59.767 [2024-11-29 11:59:05.116059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.767 [2024-11-29 11:59:05.116147] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:59.767 [2024-11-29 11:59:05.116203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.767 [2024-11-29 11:59:05.119045] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.767 [2024-11-29 11:59:05.119111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:00.338 11:59:05 -- common/autotest_common.sh@858 -- # (( i == 0 
)) 00:15:00.338 11:59:05 -- common/autotest_common.sh@862 -- # return 0 00:15:00.338 11:59:05 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@24 -- # local i 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:00.338 11:59:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:15:00.597 11:59:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:00.597 11:59:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:00.597 11:59:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:00.597 11:59:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:00.597 11:59:05 -- common/autotest_common.sh@867 -- # local i 00:15:00.597 11:59:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:00.597 11:59:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:00.597 11:59:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:00.597 11:59:05 -- common/autotest_common.sh@871 -- # break 00:15:00.597 11:59:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:00.597 11:59:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:00.597 11:59:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.597 1+0 records in 00:15:00.597 1+0 records out 00:15:00.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189823 s, 21.6 MB/s 00:15:00.597 11:59:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.597 11:59:05 -- common/autotest_common.sh@884 -- # size=4096 00:15:00.597 11:59:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.597 11:59:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:00.597 11:59:05 -- common/autotest_common.sh@887 -- # return 0 00:15:00.597 11:59:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:00.597 11:59:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:00.597 11:59:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:15:00.856 11:59:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:00.856 11:59:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:00.856 11:59:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:00.856 11:59:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:00.856 11:59:06 -- common/autotest_common.sh@867 -- # local i 00:15:00.856 11:59:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:00.856 11:59:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:00.856 11:59:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:00.856 11:59:06 -- common/autotest_common.sh@871 -- # break 00:15:00.856 11:59:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:00.856 11:59:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:00.856 11:59:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.856 1+0 records in 00:15:00.856 1+0 records out 00:15:00.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815315 s, 5.0 MB/s 00:15:00.856 11:59:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.856 11:59:06 -- common/autotest_common.sh@884 -- # size=4096 00:15:00.856 11:59:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.856 11:59:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:00.856 11:59:06 -- common/autotest_common.sh@887 -- # return 0 00:15:00.856 11:59:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:00.856 11:59:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:00.856 11:59:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:15:01.114 11:59:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:01.114 11:59:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:01.114 11:59:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:01.114 11:59:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:01.114 11:59:06 -- common/autotest_common.sh@867 -- # local i 00:15:01.114 11:59:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:01.114 11:59:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:01.114 11:59:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:01.114 11:59:06 -- common/autotest_common.sh@871 -- # break 00:15:01.114 11:59:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:01.114 11:59:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:01.114 11:59:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.114 1+0 records in 00:15:01.114 1+0 records out 00:15:01.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553556 s, 7.4 MB/s 00:15:01.114 11:59:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.114 11:59:06 -- common/autotest_common.sh@884 -- # size=4096 00:15:01.114 11:59:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.114 11:59:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:01.114 11:59:06 -- common/autotest_common.sh@887 -- # return 0 00:15:01.114 11:59:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:01.114 11:59:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
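A minimal sketch of the attach-and-verify pattern the trace above repeats for every bdev/nbd pair. The rpc.py path, the /var/tmp/spdk-nbd.sock socket and the nbd_start_disk RPC are taken from the log itself; the helper name start_and_check_nbd, the /tmp/nbdtest scratch file and the sleep in the wait loop are illustrative assumptions, not the exact nbd_common.sh / autotest_common.sh code.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  start_and_check_nbd() {
    local bdev=$1 dev=$2 name=${2#/dev/}
    # Export the bdev over NBD through the bdev_svc app listening on $sock.
    "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
    # Poll /proc/partitions until the kernel publishes the device (up to ~20 tries).
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$name" /proc/partitions && break
      sleep 0.1
    done
    # Read one 4 KiB block through the device to confirm I/O reaches the bdev,
    # then check the copy is non-empty (same order as the trace: stat, rm, test).
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
  }
  # e.g.: start_and_check_nbd Malloc1p1 /dev/nbd2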
00:15:01.114 11:59:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:15:01.372 11:59:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:01.372 11:59:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:01.372 11:59:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:01.372 11:59:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:01.372 11:59:06 -- common/autotest_common.sh@867 -- # local i 00:15:01.372 11:59:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:01.372 11:59:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:01.372 11:59:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:01.373 11:59:06 -- common/autotest_common.sh@871 -- # break 00:15:01.373 11:59:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:01.373 11:59:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:01.373 11:59:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.373 1+0 records in 00:15:01.373 1+0 records out 00:15:01.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000934635 s, 4.4 MB/s 00:15:01.373 11:59:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.373 11:59:06 -- common/autotest_common.sh@884 -- # size=4096 00:15:01.373 11:59:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.373 11:59:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:01.373 11:59:06 -- common/autotest_common.sh@887 -- # return 0 00:15:01.373 11:59:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:01.373 11:59:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:01.373 11:59:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:15:01.631 11:59:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:01.631 11:59:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:01.631 11:59:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:01.631 11:59:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:01.631 11:59:07 -- common/autotest_common.sh@867 -- # local i 00:15:01.631 11:59:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:01.631 11:59:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:01.631 11:59:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:01.631 11:59:07 -- common/autotest_common.sh@871 -- # break 00:15:01.631 11:59:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:01.631 11:59:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:01.631 11:59:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.631 1+0 records in 00:15:01.631 1+0 records out 00:15:01.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721927 s, 5.7 MB/s 00:15:01.631 11:59:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.631 11:59:07 -- common/autotest_common.sh@884 -- # size=4096 00:15:01.631 11:59:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.631 11:59:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:01.631 11:59:07 -- common/autotest_common.sh@887 -- # return 0 00:15:01.631 11:59:07 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:01.631 11:59:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:01.631 11:59:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:15:01.889 11:59:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:01.889 11:59:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:01.889 11:59:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:01.889 11:59:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:01.889 11:59:07 -- common/autotest_common.sh@867 -- # local i 00:15:01.889 11:59:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:01.889 11:59:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:01.889 11:59:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:01.889 11:59:07 -- common/autotest_common.sh@871 -- # break 00:15:01.889 11:59:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:01.889 11:59:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:01.889 11:59:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.889 1+0 records in 00:15:01.889 1+0 records out 00:15:01.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767573 s, 5.3 MB/s 00:15:01.889 11:59:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.889 11:59:07 -- common/autotest_common.sh@884 -- # size=4096 00:15:01.889 11:59:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.889 11:59:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:01.889 11:59:07 -- common/autotest_common.sh@887 -- # return 0 00:15:01.889 11:59:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:01.889 11:59:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:01.889 11:59:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:15:02.457 11:59:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:15:02.457 11:59:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:15:02.457 11:59:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:15:02.457 11:59:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:15:02.457 11:59:07 -- common/autotest_common.sh@867 -- # local i 00:15:02.457 11:59:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:02.457 11:59:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:02.457 11:59:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:15:02.457 11:59:07 -- common/autotest_common.sh@871 -- # break 00:15:02.457 11:59:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:02.457 11:59:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:02.457 11:59:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.457 1+0 records in 00:15:02.457 1+0 records out 00:15:02.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587348 s, 7.0 MB/s 00:15:02.457 11:59:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.457 11:59:07 -- common/autotest_common.sh@884 -- # size=4096 00:15:02.457 11:59:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.457 11:59:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
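The same check is driven in lockstep over all sixteen bdevs and /dev/nbdN nodes; roughly (bdev_list and nbd_list as declared in the trace above, loop body assumed rather than copied from nbd_common.sh):

  for ((i = 0; i < 16; i++)); do
    start_and_check_nbd "${bdev_list[$i]}" "${nbd_list[$i]}"
  done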
00:15:02.457 11:59:07 -- common/autotest_common.sh@887 -- # return 0 00:15:02.457 11:59:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:02.457 11:59:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:02.457 11:59:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:15:02.715 11:59:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:15:02.715 11:59:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:15:02.715 11:59:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:15:02.715 11:59:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:15:02.715 11:59:08 -- common/autotest_common.sh@867 -- # local i 00:15:02.715 11:59:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:02.715 11:59:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:02.715 11:59:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:15:02.715 11:59:08 -- common/autotest_common.sh@871 -- # break 00:15:02.715 11:59:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:02.715 11:59:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:02.715 11:59:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.715 1+0 records in 00:15:02.715 1+0 records out 00:15:02.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049044 s, 8.4 MB/s 00:15:02.715 11:59:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.715 11:59:08 -- common/autotest_common.sh@884 -- # size=4096 00:15:02.715 11:59:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.715 11:59:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:02.715 11:59:08 -- common/autotest_common.sh@887 -- # return 0 00:15:02.715 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:02.715 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:02.715 11:59:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:15:02.974 11:59:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:15:02.974 11:59:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:15:02.974 11:59:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:15:02.974 11:59:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:15:02.974 11:59:08 -- common/autotest_common.sh@867 -- # local i 00:15:02.974 11:59:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:02.974 11:59:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:02.974 11:59:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:15:02.974 11:59:08 -- common/autotest_common.sh@871 -- # break 00:15:02.974 11:59:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:02.974 11:59:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:02.974 11:59:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.974 1+0 records in 00:15:02.974 1+0 records out 00:15:02.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462261 s, 8.9 MB/s 00:15:02.974 11:59:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.974 11:59:08 -- common/autotest_common.sh@884 -- # size=4096 00:15:02.974 11:59:08 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.974 11:59:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:02.974 11:59:08 -- common/autotest_common.sh@887 -- # return 0 00:15:02.974 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:02.974 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:02.974 11:59:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:15:03.232 11:59:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:15:03.232 11:59:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:15:03.232 11:59:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:15:03.232 11:59:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:15:03.232 11:59:08 -- common/autotest_common.sh@867 -- # local i 00:15:03.232 11:59:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.232 11:59:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.232 11:59:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:15:03.232 11:59:08 -- common/autotest_common.sh@871 -- # break 00:15:03.232 11:59:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.232 11:59:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.232 11:59:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.232 1+0 records in 00:15:03.232 1+0 records out 00:15:03.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000943595 s, 4.3 MB/s 00:15:03.232 11:59:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.232 11:59:08 -- common/autotest_common.sh@884 -- # size=4096 00:15:03.232 11:59:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.232 11:59:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.232 11:59:08 -- common/autotest_common.sh@887 -- # return 0 00:15:03.232 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.232 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:03.232 11:59:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:15:03.492 11:59:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:15:03.492 11:59:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:15:03.492 11:59:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:15:03.492 11:59:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:15:03.492 11:59:08 -- common/autotest_common.sh@867 -- # local i 00:15:03.492 11:59:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.492 11:59:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.492 11:59:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:15:03.492 11:59:08 -- common/autotest_common.sh@871 -- # break 00:15:03.492 11:59:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.492 11:59:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.492 11:59:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.492 1+0 records in 00:15:03.492 1+0 records out 00:15:03.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782925 s, 5.2 MB/s 00:15:03.492 11:59:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.492 11:59:08 -- 
common/autotest_common.sh@884 -- # size=4096 00:15:03.492 11:59:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.492 11:59:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.492 11:59:08 -- common/autotest_common.sh@887 -- # return 0 00:15:03.492 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.492 11:59:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:03.492 11:59:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:15:03.753 11:59:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:15:03.753 11:59:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:15:03.753 11:59:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:15:03.753 11:59:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:15:03.753 11:59:09 -- common/autotest_common.sh@867 -- # local i 00:15:03.753 11:59:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.753 11:59:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.753 11:59:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:15:03.753 11:59:09 -- common/autotest_common.sh@871 -- # break 00:15:03.753 11:59:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.753 11:59:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.753 11:59:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.753 1+0 records in 00:15:03.753 1+0 records out 00:15:03.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654644 s, 6.3 MB/s 00:15:03.753 11:59:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.753 11:59:09 -- common/autotest_common.sh@884 -- # size=4096 00:15:03.753 11:59:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.753 11:59:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.753 11:59:09 -- common/autotest_common.sh@887 -- # return 0 00:15:03.753 11:59:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.753 11:59:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:03.753 11:59:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:15:04.011 11:59:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:15:04.011 11:59:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:15:04.011 11:59:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:15:04.011 11:59:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:15:04.011 11:59:09 -- common/autotest_common.sh@867 -- # local i 00:15:04.011 11:59:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:04.011 11:59:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:04.011 11:59:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:15:04.011 11:59:09 -- common/autotest_common.sh@871 -- # break 00:15:04.011 11:59:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:04.011 11:59:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:04.011 11:59:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.011 1+0 records in 00:15:04.011 1+0 records out 00:15:04.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658993 s, 6.2 MB/s 00:15:04.011 11:59:09 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.011 11:59:09 -- common/autotest_common.sh@884 -- # size=4096 00:15:04.011 11:59:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.011 11:59:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:04.011 11:59:09 -- common/autotest_common.sh@887 -- # return 0 00:15:04.011 11:59:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.011 11:59:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:04.011 11:59:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:15:04.269 11:59:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:15:04.269 11:59:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:15:04.528 11:59:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:15:04.528 11:59:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:15:04.528 11:59:09 -- common/autotest_common.sh@867 -- # local i 00:15:04.528 11:59:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:04.528 11:59:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:04.528 11:59:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:15:04.528 11:59:09 -- common/autotest_common.sh@871 -- # break 00:15:04.528 11:59:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:04.528 11:59:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:04.528 11:59:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.528 1+0 records in 00:15:04.528 1+0 records out 00:15:04.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996632 s, 4.1 MB/s 00:15:04.528 11:59:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.528 11:59:09 -- common/autotest_common.sh@884 -- # size=4096 00:15:04.528 11:59:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.528 11:59:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:04.528 11:59:09 -- common/autotest_common.sh@887 -- # return 0 00:15:04.528 11:59:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.528 11:59:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:04.528 11:59:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:15:04.786 11:59:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:15:04.786 11:59:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:15:04.786 11:59:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:15:04.786 11:59:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:15:04.786 11:59:10 -- common/autotest_common.sh@867 -- # local i 00:15:04.786 11:59:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:04.786 11:59:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:04.786 11:59:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:15:04.786 11:59:10 -- common/autotest_common.sh@871 -- # break 00:15:04.786 11:59:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:04.786 11:59:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:04.786 11:59:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.786 1+0 records in 00:15:04.786 1+0 records out 
00:15:04.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928321 s, 4.4 MB/s 00:15:04.786 11:59:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.786 11:59:10 -- common/autotest_common.sh@884 -- # size=4096 00:15:04.786 11:59:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.786 11:59:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:04.786 11:59:10 -- common/autotest_common.sh@887 -- # return 0 00:15:04.786 11:59:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.786 11:59:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:04.786 11:59:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:15:05.045 11:59:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:15:05.045 11:59:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:15:05.045 11:59:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:15:05.045 11:59:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:15:05.045 11:59:10 -- common/autotest_common.sh@867 -- # local i 00:15:05.045 11:59:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:05.045 11:59:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:05.045 11:59:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:15:05.045 11:59:10 -- common/autotest_common.sh@871 -- # break 00:15:05.045 11:59:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:05.045 11:59:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:05.045 11:59:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.045 1+0 records in 00:15:05.045 1+0 records out 00:15:05.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104444 s, 3.9 MB/s 00:15:05.045 11:59:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.045 11:59:10 -- common/autotest_common.sh@884 -- # size=4096 00:15:05.045 11:59:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.045 11:59:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:05.045 11:59:10 -- common/autotest_common.sh@887 -- # return 0 00:15:05.045 11:59:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:05.045 11:59:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:05.045 11:59:10 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:05.303 11:59:10 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd0", 00:15:05.303 "bdev_name": "Malloc0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd1", 00:15:05.303 "bdev_name": "Malloc1p0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd2", 00:15:05.303 "bdev_name": "Malloc1p1" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd3", 00:15:05.303 "bdev_name": "Malloc2p0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd4", 00:15:05.303 "bdev_name": "Malloc2p1" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd5", 00:15:05.303 "bdev_name": "Malloc2p2" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd6", 00:15:05.303 "bdev_name": "Malloc2p3" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd7", 00:15:05.303 "bdev_name": "Malloc2p4" 00:15:05.303 }, 
00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd8", 00:15:05.303 "bdev_name": "Malloc2p5" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd9", 00:15:05.303 "bdev_name": "Malloc2p6" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd10", 00:15:05.303 "bdev_name": "Malloc2p7" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd11", 00:15:05.303 "bdev_name": "TestPT" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd12", 00:15:05.303 "bdev_name": "raid0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd13", 00:15:05.303 "bdev_name": "concat0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd14", 00:15:05.303 "bdev_name": "raid1" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd15", 00:15:05.303 "bdev_name": "AIO0" 00:15:05.303 } 00:15:05.303 ]' 00:15:05.303 11:59:10 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:05.303 11:59:10 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:05.303 11:59:10 -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd0", 00:15:05.303 "bdev_name": "Malloc0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd1", 00:15:05.303 "bdev_name": "Malloc1p0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd2", 00:15:05.303 "bdev_name": "Malloc1p1" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd3", 00:15:05.303 "bdev_name": "Malloc2p0" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd4", 00:15:05.303 "bdev_name": "Malloc2p1" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd5", 00:15:05.303 "bdev_name": "Malloc2p2" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd6", 00:15:05.303 "bdev_name": "Malloc2p3" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd7", 00:15:05.303 "bdev_name": "Malloc2p4" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd8", 00:15:05.303 "bdev_name": "Malloc2p5" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd9", 00:15:05.303 "bdev_name": "Malloc2p6" 00:15:05.303 }, 00:15:05.303 { 00:15:05.303 "nbd_device": "/dev/nbd10", 00:15:05.303 "bdev_name": "Malloc2p7" 00:15:05.304 }, 00:15:05.304 { 00:15:05.304 "nbd_device": "/dev/nbd11", 00:15:05.304 "bdev_name": "TestPT" 00:15:05.304 }, 00:15:05.304 { 00:15:05.304 "nbd_device": "/dev/nbd12", 00:15:05.304 "bdev_name": "raid0" 00:15:05.304 }, 00:15:05.304 { 00:15:05.304 "nbd_device": "/dev/nbd13", 00:15:05.304 "bdev_name": "concat0" 00:15:05.304 }, 00:15:05.304 { 00:15:05.304 "nbd_device": "/dev/nbd14", 00:15:05.304 "bdev_name": "raid1" 00:15:05.304 }, 00:15:05.304 { 00:15:05.304 "nbd_device": "/dev/nbd15", 00:15:05.304 "bdev_name": "AIO0" 00:15:05.304 } 00:15:05.304 ]' 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@51 -- # local i 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.304 11:59:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@41 -- # break 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.561 11:59:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@41 -- # break 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.818 11:59:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@41 -- # break 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.076 11:59:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:06.334 11:59:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@41 -- # break 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.335 11:59:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:06.900 
11:59:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@41 -- # break 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:06.900 11:59:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@41 -- # break 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.901 11:59:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@41 -- # break 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.158 11:59:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:15:07.417 11:59:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:15:07.417 11:59:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:15:07.417 11:59:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:15:07.417 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.417 11:59:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.417 11:59:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:15:07.676 11:59:12 -- bdev/nbd_common.sh@41 -- # break 00:15:07.676 11:59:12 -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.676 11:59:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.676 11:59:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@41 -- # break 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:07.676 11:59:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@41 -- # break 00:15:07.934 11:59:13 -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.935 11:59:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.935 11:59:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:08.194 11:59:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:08.194 11:59:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:08.194 11:59:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:08.194 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.194 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.194 11:59:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@41 -- # break 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@41 -- # break 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.453 11:59:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@41 -- # break 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.712 11:59:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@41 -- # break 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.971 11:59:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@41 -- # break 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.231 11:59:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@41 -- # break 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.496 11:59:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:09.753 11:59:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:09.753 11:59:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:09.753 11:59:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:10.010 11:59:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@65 -- # true 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@65 -- # count=0 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@122 -- # count=0 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@127 -- # return 0 00:15:10.011 11:59:15 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@12 -- # local i 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:10.011 11:59:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:10.269 /dev/nbd0 00:15:10.269 11:59:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:10.269 11:59:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.269 11:59:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:10.269 11:59:15 -- common/autotest_common.sh@867 -- # local i 00:15:10.269 11:59:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:10.269 11:59:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:10.269 11:59:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:10.269 11:59:15 -- common/autotest_common.sh@871 -- # break 00:15:10.269 11:59:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:10.269 11:59:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:10.269 11:59:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.269 1+0 records in 00:15:10.269 1+0 records out 00:15:10.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582498 s, 7.0 MB/s 00:15:10.269 11:59:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.269 11:59:15 -- common/autotest_common.sh@884 -- # size=4096 00:15:10.269 11:59:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.269 11:59:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:10.269 11:59:15 -- common/autotest_common.sh@887 -- # return 0 00:15:10.269 11:59:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.269 
11:59:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:10.269 11:59:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:15:10.527 /dev/nbd1 00:15:10.527 11:59:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.527 11:59:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.527 11:59:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:10.527 11:59:15 -- common/autotest_common.sh@867 -- # local i 00:15:10.527 11:59:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:10.528 11:59:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:10.528 11:59:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:10.528 11:59:15 -- common/autotest_common.sh@871 -- # break 00:15:10.528 11:59:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:10.528 11:59:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:10.528 11:59:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.528 1+0 records in 00:15:10.528 1+0 records out 00:15:10.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265537 s, 15.4 MB/s 00:15:10.528 11:59:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.528 11:59:15 -- common/autotest_common.sh@884 -- # size=4096 00:15:10.528 11:59:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.528 11:59:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:10.528 11:59:15 -- common/autotest_common.sh@887 -- # return 0 00:15:10.528 11:59:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.528 11:59:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:10.528 11:59:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:15:10.787 /dev/nbd10 00:15:10.787 11:59:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:10.787 11:59:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:10.787 11:59:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:15:10.787 11:59:16 -- common/autotest_common.sh@867 -- # local i 00:15:10.787 11:59:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:10.787 11:59:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:10.787 11:59:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:15:10.787 11:59:16 -- common/autotest_common.sh@871 -- # break 00:15:10.787 11:59:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:10.787 11:59:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:10.787 11:59:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.787 1+0 records in 00:15:10.787 1+0 records out 00:15:10.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686875 s, 6.0 MB/s 00:15:10.787 11:59:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.787 11:59:16 -- common/autotest_common.sh@884 -- # size=4096 00:15:10.787 11:59:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.787 11:59:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:10.787 11:59:16 -- common/autotest_common.sh@887 -- # return 0 00:15:10.787 11:59:16 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:15:10.787 11:59:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:10.787 11:59:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:15:11.046 /dev/nbd11 00:15:11.305 11:59:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:11.305 11:59:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:11.305 11:59:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:15:11.305 11:59:16 -- common/autotest_common.sh@867 -- # local i 00:15:11.305 11:59:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.305 11:59:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.305 11:59:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:15:11.305 11:59:16 -- common/autotest_common.sh@871 -- # break 00:15:11.305 11:59:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.305 11:59:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.305 11:59:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.305 1+0 records in 00:15:11.305 1+0 records out 00:15:11.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469435 s, 8.7 MB/s 00:15:11.305 11:59:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.305 11:59:16 -- common/autotest_common.sh@884 -- # size=4096 00:15:11.305 11:59:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.305 11:59:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.305 11:59:16 -- common/autotest_common.sh@887 -- # return 0 00:15:11.305 11:59:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.305 11:59:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:11.305 11:59:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:15:11.563 /dev/nbd12 00:15:11.563 11:59:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:11.563 11:59:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:11.563 11:59:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:15:11.563 11:59:16 -- common/autotest_common.sh@867 -- # local i 00:15:11.563 11:59:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.563 11:59:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.563 11:59:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:15:11.563 11:59:16 -- common/autotest_common.sh@871 -- # break 00:15:11.563 11:59:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.563 11:59:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.563 11:59:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.563 1+0 records in 00:15:11.563 1+0 records out 00:15:11.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000956861 s, 4.3 MB/s 00:15:11.563 11:59:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.563 11:59:16 -- common/autotest_common.sh@884 -- # size=4096 00:15:11.563 11:59:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.563 11:59:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.563 11:59:16 -- common/autotest_common.sh@887 -- # return 0 00:15:11.563 11:59:16 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.563 11:59:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:11.563 11:59:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:15:11.821 /dev/nbd13 00:15:11.821 11:59:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:11.821 11:59:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:11.821 11:59:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:15:11.821 11:59:17 -- common/autotest_common.sh@867 -- # local i 00:15:11.821 11:59:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.821 11:59:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.821 11:59:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:15:11.821 11:59:17 -- common/autotest_common.sh@871 -- # break 00:15:11.821 11:59:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.821 11:59:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.821 11:59:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.821 1+0 records in 00:15:11.821 1+0 records out 00:15:11.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603974 s, 6.8 MB/s 00:15:11.821 11:59:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.821 11:59:17 -- common/autotest_common.sh@884 -- # size=4096 00:15:11.821 11:59:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.821 11:59:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.821 11:59:17 -- common/autotest_common.sh@887 -- # return 0 00:15:11.821 11:59:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.821 11:59:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:11.821 11:59:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:15:12.079 /dev/nbd14 00:15:12.079 11:59:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:15:12.079 11:59:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:15:12.079 11:59:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:15:12.079 11:59:17 -- common/autotest_common.sh@867 -- # local i 00:15:12.079 11:59:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.079 11:59:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.079 11:59:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:15:12.079 11:59:17 -- common/autotest_common.sh@871 -- # break 00:15:12.079 11:59:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.079 11:59:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.079 11:59:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.079 1+0 records in 00:15:12.079 1+0 records out 00:15:12.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684095 s, 6.0 MB/s 00:15:12.079 11:59:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.079 11:59:17 -- common/autotest_common.sh@884 -- # size=4096 00:15:12.079 11:59:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.079 11:59:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.079 11:59:17 -- common/autotest_common.sh@887 -- # return 0 
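The block above repeats one start-and-verify pattern per bdev/NBD pair: nbd_start_disk over the spdk-nbd RPC socket, a poll of /proc/partitions until the device node registers, then a single O_DIRECT 4 KiB read through dd. A minimal sketch of that pattern, reconstructed from the trace (the wrapper function name and the sleep between polls are illustrative additions, not taken from the scripts):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    start_and_check() {                       # hypothetical wrapper around the traced steps
        local bdev=$1 dev=$2 i
        "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
        for ((i = 1; i <= 20; i++)); do       # waitfornbd: up to 20 polls of /proc/partitions
            grep -q -w "$(basename "$dev")" /proc/partitions && break
            sleep 0.1                         # assumed delay; the trace only shows the loop bounds
        done
        dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct   # one direct 4 KiB read must succeed
        [ "$(stat -c %s "$tmp")" != 0 ]       # the copied file must be non-empty
        rm -f "$tmp"
    }

    start_and_check Malloc2p4 /dev/nbd15      # one of the pairs exercised in the trace
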
00:15:12.079 11:59:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.079 11:59:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:12.079 11:59:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:15:12.338 /dev/nbd15 00:15:12.338 11:59:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:15:12.338 11:59:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:15:12.338 11:59:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:15:12.338 11:59:17 -- common/autotest_common.sh@867 -- # local i 00:15:12.338 11:59:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.338 11:59:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.338 11:59:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:15:12.338 11:59:17 -- common/autotest_common.sh@871 -- # break 00:15:12.338 11:59:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.338 11:59:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.338 11:59:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.338 1+0 records in 00:15:12.338 1+0 records out 00:15:12.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743742 s, 5.5 MB/s 00:15:12.338 11:59:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.338 11:59:17 -- common/autotest_common.sh@884 -- # size=4096 00:15:12.338 11:59:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.338 11:59:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.338 11:59:17 -- common/autotest_common.sh@887 -- # return 0 00:15:12.338 11:59:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.338 11:59:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:12.338 11:59:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:15:12.599 /dev/nbd2 00:15:12.599 11:59:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:15:12.599 11:59:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:15:12.599 11:59:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:12.599 11:59:17 -- common/autotest_common.sh@867 -- # local i 00:15:12.599 11:59:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.599 11:59:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.599 11:59:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:12.599 11:59:17 -- common/autotest_common.sh@871 -- # break 00:15:12.599 11:59:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.599 11:59:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.599 11:59:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.599 1+0 records in 00:15:12.600 1+0 records out 00:15:12.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922692 s, 4.4 MB/s 00:15:12.600 11:59:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.600 11:59:18 -- common/autotest_common.sh@884 -- # size=4096 00:15:12.600 11:59:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.600 11:59:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.600 11:59:18 -- common/autotest_common.sh@887 
-- # return 0 00:15:12.600 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.600 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:12.600 11:59:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:15:12.858 /dev/nbd3 00:15:12.858 11:59:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:15:12.858 11:59:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:15:12.858 11:59:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:12.858 11:59:18 -- common/autotest_common.sh@867 -- # local i 00:15:12.858 11:59:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.858 11:59:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.858 11:59:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:12.858 11:59:18 -- common/autotest_common.sh@871 -- # break 00:15:12.858 11:59:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.858 11:59:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.858 11:59:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.858 1+0 records in 00:15:12.858 1+0 records out 00:15:12.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958726 s, 4.3 MB/s 00:15:12.858 11:59:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.858 11:59:18 -- common/autotest_common.sh@884 -- # size=4096 00:15:12.858 11:59:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.858 11:59:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.858 11:59:18 -- common/autotest_common.sh@887 -- # return 0 00:15:12.858 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.858 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:12.858 11:59:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:15:13.116 /dev/nbd4 00:15:13.116 11:59:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:15:13.116 11:59:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:15:13.116 11:59:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:13.116 11:59:18 -- common/autotest_common.sh@867 -- # local i 00:15:13.116 11:59:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.116 11:59:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.116 11:59:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:13.116 11:59:18 -- common/autotest_common.sh@871 -- # break 00:15:13.116 11:59:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.116 11:59:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.116 11:59:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.116 1+0 records in 00:15:13.116 1+0 records out 00:15:13.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127117 s, 3.2 MB/s 00:15:13.116 11:59:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.116 11:59:18 -- common/autotest_common.sh@884 -- # size=4096 00:15:13.116 11:59:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.116 11:59:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.116 11:59:18 -- 
common/autotest_common.sh@887 -- # return 0 00:15:13.116 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.116 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:13.116 11:59:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:15:13.375 /dev/nbd5 00:15:13.375 11:59:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:15:13.375 11:59:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:15:13.375 11:59:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:13.375 11:59:18 -- common/autotest_common.sh@867 -- # local i 00:15:13.375 11:59:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.375 11:59:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.375 11:59:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:13.375 11:59:18 -- common/autotest_common.sh@871 -- # break 00:15:13.375 11:59:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.375 11:59:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.375 11:59:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.375 1+0 records in 00:15:13.375 1+0 records out 00:15:13.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129291 s, 3.2 MB/s 00:15:13.375 11:59:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.375 11:59:18 -- common/autotest_common.sh@884 -- # size=4096 00:15:13.375 11:59:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.375 11:59:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.375 11:59:18 -- common/autotest_common.sh@887 -- # return 0 00:15:13.375 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.375 11:59:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:13.375 11:59:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:15:13.634 /dev/nbd6 00:15:13.634 11:59:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:15:13.634 11:59:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:15:13.634 11:59:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:15:13.634 11:59:19 -- common/autotest_common.sh@867 -- # local i 00:15:13.634 11:59:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.634 11:59:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.634 11:59:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:15:13.634 11:59:19 -- common/autotest_common.sh@871 -- # break 00:15:13.634 11:59:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.634 11:59:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.634 11:59:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.634 1+0 records in 00:15:13.634 1+0 records out 00:15:13.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875898 s, 4.7 MB/s 00:15:13.634 11:59:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.634 11:59:19 -- common/autotest_common.sh@884 -- # size=4096 00:15:13.634 11:59:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.634 11:59:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.634 11:59:19 -- 
common/autotest_common.sh@887 -- # return 0 00:15:13.634 11:59:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.634 11:59:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:13.634 11:59:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:15:13.892 /dev/nbd7 00:15:13.892 11:59:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:15:13.892 11:59:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:15:13.892 11:59:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:15:13.892 11:59:19 -- common/autotest_common.sh@867 -- # local i 00:15:13.892 11:59:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.892 11:59:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.892 11:59:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:15:13.892 11:59:19 -- common/autotest_common.sh@871 -- # break 00:15:13.892 11:59:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.892 11:59:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.892 11:59:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.892 1+0 records in 00:15:13.892 1+0 records out 00:15:13.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127235 s, 3.2 MB/s 00:15:13.892 11:59:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.892 11:59:19 -- common/autotest_common.sh@884 -- # size=4096 00:15:13.892 11:59:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.892 11:59:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.892 11:59:19 -- common/autotest_common.sh@887 -- # return 0 00:15:13.892 11:59:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.892 11:59:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:13.892 11:59:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:15:14.459 /dev/nbd8 00:15:14.459 11:59:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:15:14.459 11:59:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:15:14.459 11:59:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:15:14.459 11:59:19 -- common/autotest_common.sh@867 -- # local i 00:15:14.459 11:59:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:14.459 11:59:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:14.459 11:59:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:15:14.459 11:59:19 -- common/autotest_common.sh@871 -- # break 00:15:14.459 11:59:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:14.459 11:59:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:14.459 11:59:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.459 1+0 records in 00:15:14.459 1+0 records out 00:15:14.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135596 s, 3.0 MB/s 00:15:14.459 11:59:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.459 11:59:19 -- common/autotest_common.sh@884 -- # size=4096 00:15:14.459 11:59:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.459 11:59:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:14.459 11:59:19 -- 
common/autotest_common.sh@887 -- # return 0 00:15:14.459 11:59:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.459 11:59:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:14.459 11:59:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:15:14.717 /dev/nbd9 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:15:14.717 11:59:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:15:14.717 11:59:20 -- common/autotest_common.sh@867 -- # local i 00:15:14.717 11:59:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:14.717 11:59:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:14.717 11:59:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:15:14.717 11:59:20 -- common/autotest_common.sh@871 -- # break 00:15:14.717 11:59:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:14.717 11:59:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:14.717 11:59:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.717 1+0 records in 00:15:14.717 1+0 records out 00:15:14.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123006 s, 3.3 MB/s 00:15:14.717 11:59:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.717 11:59:20 -- common/autotest_common.sh@884 -- # size=4096 00:15:14.717 11:59:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.717 11:59:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:14.717 11:59:20 -- common/autotest_common.sh@887 -- # return 0 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.717 11:59:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:14.976 11:59:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd0", 00:15:14.976 "bdev_name": "Malloc0" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd1", 00:15:14.976 "bdev_name": "Malloc1p0" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd10", 00:15:14.976 "bdev_name": "Malloc1p1" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd11", 00:15:14.976 "bdev_name": "Malloc2p0" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd12", 00:15:14.976 "bdev_name": "Malloc2p1" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd13", 00:15:14.976 "bdev_name": "Malloc2p2" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd14", 00:15:14.976 "bdev_name": "Malloc2p3" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd15", 00:15:14.976 "bdev_name": "Malloc2p4" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd2", 00:15:14.976 "bdev_name": "Malloc2p5" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd3", 00:15:14.976 "bdev_name": "Malloc2p6" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd4", 00:15:14.976 "bdev_name": "Malloc2p7" 00:15:14.976 }, 00:15:14.976 { 
00:15:14.976 "nbd_device": "/dev/nbd5", 00:15:14.976 "bdev_name": "TestPT" 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "nbd_device": "/dev/nbd6", 00:15:14.976 "bdev_name": "raid0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd7", 00:15:14.977 "bdev_name": "concat0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd8", 00:15:14.977 "bdev_name": "raid1" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd9", 00:15:14.977 "bdev_name": "AIO0" 00:15:14.977 } 00:15:14.977 ]' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd0", 00:15:14.977 "bdev_name": "Malloc0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd1", 00:15:14.977 "bdev_name": "Malloc1p0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd10", 00:15:14.977 "bdev_name": "Malloc1p1" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd11", 00:15:14.977 "bdev_name": "Malloc2p0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd12", 00:15:14.977 "bdev_name": "Malloc2p1" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd13", 00:15:14.977 "bdev_name": "Malloc2p2" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd14", 00:15:14.977 "bdev_name": "Malloc2p3" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd15", 00:15:14.977 "bdev_name": "Malloc2p4" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd2", 00:15:14.977 "bdev_name": "Malloc2p5" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd3", 00:15:14.977 "bdev_name": "Malloc2p6" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd4", 00:15:14.977 "bdev_name": "Malloc2p7" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd5", 00:15:14.977 "bdev_name": "TestPT" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd6", 00:15:14.977 "bdev_name": "raid0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd7", 00:15:14.977 "bdev_name": "concat0" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd8", 00:15:14.977 "bdev_name": "raid1" 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "nbd_device": "/dev/nbd9", 00:15:14.977 "bdev_name": "AIO0" 00:15:14.977 } 00:15:14.977 ]' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:14.977 /dev/nbd1 00:15:14.977 /dev/nbd10 00:15:14.977 /dev/nbd11 00:15:14.977 /dev/nbd12 00:15:14.977 /dev/nbd13 00:15:14.977 /dev/nbd14 00:15:14.977 /dev/nbd15 00:15:14.977 /dev/nbd2 00:15:14.977 /dev/nbd3 00:15:14.977 /dev/nbd4 00:15:14.977 /dev/nbd5 00:15:14.977 /dev/nbd6 00:15:14.977 /dev/nbd7 00:15:14.977 /dev/nbd8 00:15:14.977 /dev/nbd9' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:14.977 /dev/nbd1 00:15:14.977 /dev/nbd10 00:15:14.977 /dev/nbd11 00:15:14.977 /dev/nbd12 00:15:14.977 /dev/nbd13 00:15:14.977 /dev/nbd14 00:15:14.977 /dev/nbd15 00:15:14.977 /dev/nbd2 00:15:14.977 /dev/nbd3 00:15:14.977 /dev/nbd4 00:15:14.977 /dev/nbd5 00:15:14.977 /dev/nbd6 00:15:14.977 /dev/nbd7 00:15:14.977 /dev/nbd8 00:15:14.977 /dev/nbd9' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@65 -- # count=16 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@66 -- # echo 16 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@95 -- # count=16 00:15:14.977 11:59:20 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:14.977 256+0 records in 00:15:14.977 256+0 records out 00:15:14.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113167 s, 92.7 MB/s 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.977 11:59:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:15.236 256+0 records in 00:15:15.236 256+0 records out 00:15:15.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149951 s, 7.0 MB/s 00:15:15.236 11:59:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.236 11:59:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:15.236 256+0 records in 00:15:15.236 256+0 records out 00:15:15.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149452 s, 7.0 MB/s 00:15:15.236 11:59:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.236 11:59:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:15.494 256+0 records in 00:15:15.494 256+0 records out 00:15:15.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153267 s, 6.8 MB/s 00:15:15.494 11:59:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.494 11:59:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:15.494 256+0 records in 00:15:15.494 256+0 records out 00:15:15.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151793 s, 6.9 MB/s 00:15:15.494 11:59:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.494 11:59:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:15.753 256+0 records in 00:15:15.753 256+0 records out 00:15:15.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146545 s, 7.2 MB/s 00:15:15.753 11:59:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.753 11:59:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:15.753 256+0 records in 00:15:15.753 256+0 records out 00:15:15.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14239 s, 7.4 MB/s 00:15:15.753 11:59:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.753 11:59:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:15:16.022 256+0 records in 00:15:16.022 256+0 records out 00:15:16.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15347 s, 6.8 MB/s 00:15:16.022 11:59:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.022 11:59:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:15:16.295 256+0 records in 00:15:16.295 256+0 records out 00:15:16.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145975 s, 7.2 MB/s 00:15:16.295 11:59:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.295 11:59:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:15:16.295 256+0 records in 00:15:16.295 256+0 records out 00:15:16.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146583 s, 7.2 MB/s 00:15:16.295 11:59:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.295 11:59:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:15:16.553 256+0 records in 00:15:16.553 256+0 records out 00:15:16.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155821 s, 6.7 MB/s 00:15:16.553 11:59:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.553 11:59:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:15:16.553 256+0 records in 00:15:16.553 256+0 records out 00:15:16.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150724 s, 7.0 MB/s 00:15:16.553 11:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.553 11:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:15:16.812 256+0 records in 00:15:16.812 256+0 records out 00:15:16.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151027 s, 6.9 MB/s 00:15:16.812 11:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.812 11:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:15:17.070 256+0 records in 00:15:17.070 256+0 records out 00:15:17.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155629 s, 6.7 MB/s 00:15:17.070 11:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.070 11:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:15:17.070 256+0 records in 00:15:17.070 256+0 records out 00:15:17.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158337 s, 6.6 MB/s 00:15:17.070 11:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.070 11:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:15:17.329 256+0 records in 00:15:17.329 256+0 records out 00:15:17.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160928 s, 6.5 MB/s 00:15:17.329 11:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.329 11:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:15:17.588 256+0 records in 00:15:17.588 256+0 records out 00:15:17.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206719 s, 5.1 MB/s 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.588 11:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@51 -- # local i 00:15:17.588 11:59:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.589 11:59:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@41 -- # break 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.848 11:59:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:18.414 11:59:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:18.414 11:59:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:18.414 11:59:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:18.414 11:59:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.414 11:59:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.415 11:59:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:18.415 11:59:23 -- bdev/nbd_common.sh@41 -- # break 00:15:18.415 11:59:23 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.415 11:59:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.415 11:59:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.673 11:59:23 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@41 -- # break 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.673 11:59:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@41 -- # break 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.932 11:59:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@41 -- # break 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.190 11:59:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@41 -- # break 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.448 11:59:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@41 -- # break 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.706 11:59:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:15:19.963 11:59:25 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:15:19.963 11:59:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:15:19.963 11:59:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@41 -- # break 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.964 11:59:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@41 -- # break 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.221 11:59:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@41 -- # break 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.480 11:59:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@41 -- # break 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.738 11:59:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@41 
-- # break 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.996 11:59:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@41 -- # break 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.255 11:59:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@41 -- # break 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.513 11:59:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:15:21.771 11:59:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@41 -- # break 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.772 11:59:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@41 -- # break 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:22.030 11:59:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@65 -- # true 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@65 -- # count=0 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@104 -- # count=0 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@109 -- # return 0 00:15:22.289 11:59:27 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:22.289 11:59:27 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:22.548 malloc_lvol_verify 00:15:22.548 11:59:27 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:22.809 775253fe-ac12-4659-86eb-057d1ff53587 00:15:22.810 11:59:28 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:22.810 2116b386-89b3-48af-af5d-6355cc666df3 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:23.068 /dev/nbd0 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:23.068 mke2fs 1.46.5 (30-Dec-2021) 00:15:23.068 00:15:23.068 Filesystem too small for a journal 00:15:23.068 Discarding device blocks: 0/1024 done 00:15:23.068 Creating filesystem with 1024 4k blocks and 1024 inodes 00:15:23.068 00:15:23.068 Allocating group tables: 0/1 done 00:15:23.068 Writing inode tables: 0/1 done 00:15:23.068 Writing superblocks and filesystem accounting information: 0/1 done 00:15:23.068 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@51 -- # local i 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.068 11:59:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.326 
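For reference, a minimal sketch of the waitfornbd_exit pattern the trace repeats for every stopped device: after nbd_stop_disk, poll /proc/partitions until the nbd name drops out. The 20-attempt bound and the break/return flow match the trace; the 0.1 s sleep between attempts is an assumption, since the traced runs never needed to retry.

# Sketch only: wait for an nbd device to disappear after nbd_stop_disk.
waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1    # still listed; retry (interval assumed)
        else
            break        # device is gone
        fi
    done
    return 0
}
waitfornbd_exit nbd0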
11:59:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@41 -- # break 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:23.326 11:59:28 -- bdev/nbd_common.sh@147 -- # return 0 00:15:23.326 11:59:28 -- bdev/blockdev.sh@324 -- # killprocess 120269 00:15:23.326 11:59:28 -- common/autotest_common.sh@936 -- # '[' -z 120269 ']' 00:15:23.326 11:59:28 -- common/autotest_common.sh@940 -- # kill -0 120269 00:15:23.326 11:59:28 -- common/autotest_common.sh@941 -- # uname 00:15:23.326 11:59:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.326 11:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120269 00:15:23.326 11:59:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.326 11:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.326 11:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120269' 00:15:23.326 killing process with pid 120269 00:15:23.326 11:59:28 -- common/autotest_common.sh@955 -- # kill 120269 00:15:23.326 11:59:28 -- common/autotest_common.sh@960 -- # wait 120269 00:15:23.893 ************************************ 00:15:23.893 END TEST bdev_nbd 00:15:23.893 ************************************ 00:15:23.893 11:59:29 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:15:23.893 00:15:23.893 real 0m24.612s 00:15:23.893 user 0m34.723s 00:15:23.893 sys 0m9.356s 00:15:23.893 11:59:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:23.893 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:15:23.893 11:59:29 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:15:23.893 11:59:29 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.893 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:15:23.893 ************************************ 00:15:23.893 START TEST bdev_fio 00:15:23.893 ************************************ 00:15:23.893 11:59:29 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@329 -- # local env_context 00:15:23.893 11:59:29 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:23.893 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:23.893 11:59:29 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:23.893 11:59:29 -- bdev/blockdev.sh@337 -- # echo '' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:15:23.893 11:59:29 -- bdev/blockdev.sh@337 -- # env_context= 00:15:23.893 11:59:29 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:23.893 11:59:29 -- common/autotest_common.sh@1270 -- # 
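The nbd_with_lvol_verify step traced above reduces to a short RPC sequence against the same socket and paths shown in the log; a sketch with error handling omitted (sizes as traced: a 16 MiB malloc bdev carrying a 4 MiB logical volume):

# Sketch of the lvol/NBD round trip seen in the trace.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in that store
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # "Filesystem too small for a journal" is expected at this size
$RPC nbd_stop_disk /dev/nbd0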
local workload=verify 00:15:23.893 11:59:29 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:15:23.893 11:59:29 -- common/autotest_common.sh@1272 -- # local env_context= 00:15:23.893 11:59:29 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:15:23.893 11:59:29 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:23.893 11:59:29 -- common/autotest_common.sh@1290 -- # cat 00:15:23.893 11:59:29 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1303 -- # cat 00:15:23.893 11:59:29 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:15:23.893 11:59:29 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:23.893 11:59:29 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b 
in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:15:23.893 11:59:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:23.893 11:59:29 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:15:23.893 11:59:29 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:23.893 11:59:29 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.893 11:59:29 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:15:23.893 11:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.893 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:15:23.893 ************************************ 00:15:23.893 START TEST bdev_fio_rw_verify 00:15:23.893 ************************************ 00:15:23.893 11:59:29 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.893 11:59:29 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.893 11:59:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:23.893 11:59:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.894 11:59:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:23.894 11:59:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.894 11:59:29 -- common/autotest_common.sh@1330 -- # shift 00:15:23.894 11:59:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:23.894 11:59:29 -- common/autotest_common.sh@1333 -- # for sanitizer in 
"${sanitizers[@]}" 00:15:23.894 11:59:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.894 11:59:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:23.894 11:59:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:24.151 11:59:29 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:15:24.151 11:59:29 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:15:24.151 11:59:29 -- common/autotest_common.sh@1336 -- # break 00:15:24.151 11:59:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:24.151 11:59:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:24.151 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.151 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:24.152 fio-3.35 00:15:24.152 Starting 16 threads 00:15:36.435 00:15:36.435 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=121426: Fri Nov 29 11:59:40 2024 00:15:36.435 read: IOPS=78.7k, BW=307MiB/s (322MB/s)(3077MiB/10007msec) 00:15:36.435 slat (usec): min=2, max=59929, avg=34.11, stdev=442.77 00:15:36.435 clat (usec): min=9, max=60232, avg=279.73, stdev=1323.40 00:15:36.435 lat (usec): 
min=22, max=60256, avg=313.84, stdev=1394.73 00:15:36.435 clat percentiles (usec): 00:15:36.435 | 50.000th=[ 161], 99.000th=[ 676], 99.900th=[16319], 99.990th=[32113], 00:15:36.435 | 99.999th=[60031] 00:15:36.435 write: IOPS=125k, BW=487MiB/s (511MB/s)(4826MiB/9901msec); 0 zone resets 00:15:36.435 slat (usec): min=4, max=74243, avg=66.37, stdev=694.11 00:15:36.435 clat (usec): min=10, max=74610, avg=370.02, stdev=1567.68 00:15:36.435 lat (usec): min=38, max=74658, avg=436.39, stdev=1714.49 00:15:36.435 clat percentiles (usec): 00:15:36.435 | 50.000th=[ 208], 99.000th=[ 4686], 99.900th=[21627], 99.990th=[34866], 00:15:36.435 | 99.999th=[51119] 00:15:36.435 bw ( KiB/s): min=295832, max=803200, per=99.03%, avg=494346.29, stdev=8632.54, samples=305 00:15:36.435 iops : min=73958, max=200800, avg=123586.49, stdev=2158.13, samples=305 00:15:36.435 lat (usec) : 10=0.01%, 20=0.01%, 50=0.95%, 100=14.20%, 250=57.69% 00:15:36.435 lat (usec) : 500=24.17%, 750=1.67%, 1000=0.14% 00:15:36.435 lat (msec) : 2=0.14%, 4=0.10%, 10=0.23%, 20=0.59%, 50=0.11% 00:15:36.435 lat (msec) : 100=0.01% 00:15:36.435 cpu : usr=55.58%, sys=2.01%, ctx=216422, majf=2, minf=95885 00:15:36.435 IO depths : 1=11.4%, 2=23.9%, 4=51.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.435 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.435 issued rwts: total=787736,1235574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:36.435 00:15:36.435 Run status group 0 (all jobs): 00:15:36.435 READ: bw=307MiB/s (322MB/s), 307MiB/s-307MiB/s (322MB/s-322MB/s), io=3077MiB (3227MB), run=10007-10007msec 00:15:36.435 WRITE: bw=487MiB/s (511MB/s), 487MiB/s-487MiB/s (511MB/s-511MB/s), io=4826MiB (5061MB), run=9901-9901msec 00:15:36.435 ----------------------------------------------------- 00:15:36.435 Suppressions used: 00:15:36.435 count bytes template 00:15:36.435 16 140 /usr/src/fio/parse.c 00:15:36.435 9304 893184 /usr/src/fio/iolog.c 00:15:36.435 1 904 libcrypto.so 00:15:36.435 ----------------------------------------------------- 00:15:36.435 00:15:36.435 ************************************ 00:15:36.435 END TEST bdev_fio_rw_verify 00:15:36.435 ************************************ 00:15:36.435 00:15:36.435 real 0m11.857s 00:15:36.435 user 1m31.700s 00:15:36.435 sys 0m4.115s 00:15:36.435 11:59:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:36.435 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:15:36.435 11:59:41 -- bdev/blockdev.sh@348 -- # rm -f 00:15:36.435 11:59:41 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.435 11:59:41 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:36.435 11:59:41 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.435 11:59:41 -- common/autotest_common.sh@1270 -- # local workload=trim 00:15:36.435 11:59:41 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:15:36.435 11:59:41 -- common/autotest_common.sh@1272 -- # local env_context= 00:15:36.435 11:59:41 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:15:36.435 11:59:41 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:36.435 11:59:41 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:15:36.435 11:59:41 -- 
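Before launching the rw-verify run above, the harness resolves the ASan runtime that the fio bdev plugin links against and preloads it together with the plugin, so the sanitizer's interceptors are initialized before the plugin loads. A condensed sketch of that step and the resulting fio invocation, with paths and flags exactly as traced:

# Sketch of the sanitizer preload + fio launch from the trace.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /lib/x86_64-linux-gnu/libasan.so.6 in this run
[[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Per the summary above, that 16-job run completed 787,736 reads and 1,235,574 writes in roughly 10 seconds, i.e. the 307 MiB/s read and 487 MiB/s write figures reported.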
common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:15:36.435 11:59:41 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.435 11:59:41 -- common/autotest_common.sh@1290 -- # cat 00:15:36.435 11:59:41 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:15:36.435 11:59:41 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:15:36.435 11:59:41 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:15:36.435 11:59:41 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:36.436 11:59:41 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "84513506-4942-474c-90e2-e8cb4c7a0c0b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "84513506-4942-474c-90e2-e8cb4c7a0c0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d573de8c-ab6c-5ccf-866b-fb1722633a6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d573de8c-ab6c-5ccf-866b-fb1722633a6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ca81c438-abe5-5726-8bbb-a219d3ba5991"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ca81c438-abe5-5726-8bbb-a219d3ba5991",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a954964b-bddb-5225-9a66-67afc2d6ad0d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a954964b-bddb-5225-9a66-67afc2d6ad0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "43352302-8556-5220-b3a3-524eea111706"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "43352302-8556-5220-b3a3-524eea111706",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "63568ebe-73c3-5dc0-987a-eee30ca32dd1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "63568ebe-73c3-5dc0-987a-eee30ca32dd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "56be882d-5264-5ff6-9f94-b57d6413ba7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56be882d-5264-5ff6-9f94-b57d6413ba7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cfaadef2-4e67-5c48-9618-d7167188174a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfaadef2-4e67-5c48-9618-d7167188174a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "9d2dd6b6-402f-5c20-913f-29300130f9f8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d2dd6b6-402f-5c20-913f-29300130f9f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "73384634-feaf-5416-82ce-dcc6389f1b52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "73384634-feaf-5416-82ce-dcc6389f1b52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "10ac1d90-abc3-55b0-ac5a-3d0da1e003d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10ac1d90-abc3-55b0-ac5a-3d0da1e003d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a24ebb49-38d3-53e0-8511-84a240bea5a4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a24ebb49-38d3-53e0-8511-84a240bea5a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "989732c5-784e-44b4-8bad-2d2228f4ed25",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "5829174b-fdb8-4339-8657-affcacca4d06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "8afa122a-ffac-4d9f-a813-dfea4dded088"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8afa122a-ffac-4d9f-a813-dfea4dded088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8afa122a-ffac-4d9f-a813-dfea4dded088",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "dfec51ab-7035-447e-8cdc-a2f0a3250877",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f3b40517-97fc-420d-b3ad-206b8c81b058",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "da6e5b36-1ff7-45bf-8d5a-afcb42524f12"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "da6e5b36-1ff7-45bf-8d5a-afcb42524f12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "da6e5b36-1ff7-45bf-8d5a-afcb42524f12",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "79a2589d-fdec-4786-bf79-a13206608e80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "32b07c2c-090c-4ed8-8b63-f6e0ffc46861",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "33e61c46-46d4-49b5-a4d4-4367a84abc64"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "33e61c46-46d4-49b5-a4d4-4367a84abc64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:15:36.436 11:59:41 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:15:36.436 Malloc1p0 00:15:36.436 Malloc1p1 00:15:36.436 Malloc2p0 00:15:36.436 Malloc2p1 00:15:36.436 Malloc2p2 00:15:36.436 Malloc2p3 00:15:36.436 Malloc2p4 00:15:36.436 Malloc2p5 00:15:36.436 Malloc2p6 00:15:36.436 Malloc2p7 00:15:36.436 TestPT 00:15:36.436 raid0 00:15:36.436 concat0 ]] 00:15:36.436 11:59:41 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:36.437 11:59:41 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "84513506-4942-474c-90e2-e8cb4c7a0c0b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "84513506-4942-474c-90e2-e8cb4c7a0c0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d573de8c-ab6c-5ccf-866b-fb1722633a6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d573de8c-ab6c-5ccf-866b-fb1722633a6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ca81c438-abe5-5726-8bbb-a219d3ba5991"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ca81c438-abe5-5726-8bbb-a219d3ba5991",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a954964b-bddb-5225-9a66-67afc2d6ad0d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a954964b-bddb-5225-9a66-67afc2d6ad0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "43352302-8556-5220-b3a3-524eea111706"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "43352302-8556-5220-b3a3-524eea111706",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "63568ebe-73c3-5dc0-987a-eee30ca32dd1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "63568ebe-73c3-5dc0-987a-eee30ca32dd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "56be882d-5264-5ff6-9f94-b57d6413ba7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56be882d-5264-5ff6-9f94-b57d6413ba7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cfaadef2-4e67-5c48-9618-d7167188174a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"cfaadef2-4e67-5c48-9618-d7167188174a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "9d2dd6b6-402f-5c20-913f-29300130f9f8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d2dd6b6-402f-5c20-913f-29300130f9f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "73384634-feaf-5416-82ce-dcc6389f1b52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "73384634-feaf-5416-82ce-dcc6389f1b52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "10ac1d90-abc3-55b0-ac5a-3d0da1e003d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10ac1d90-abc3-55b0-ac5a-3d0da1e003d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a24ebb49-38d3-53e0-8511-84a240bea5a4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a24ebb49-38d3-53e0-8511-84a240bea5a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0bd5b4e1-e579-46f5-a62e-530b0ff7f4b9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "989732c5-784e-44b4-8bad-2d2228f4ed25",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "5829174b-fdb8-4339-8657-affcacca4d06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "8afa122a-ffac-4d9f-a813-dfea4dded088"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8afa122a-ffac-4d9f-a813-dfea4dded088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8afa122a-ffac-4d9f-a813-dfea4dded088",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "dfec51ab-7035-447e-8cdc-a2f0a3250877",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f3b40517-97fc-420d-b3ad-206b8c81b058",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "da6e5b36-1ff7-45bf-8d5a-afcb42524f12"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "da6e5b36-1ff7-45bf-8d5a-afcb42524f12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "da6e5b36-1ff7-45bf-8d5a-afcb42524f12",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "79a2589d-fdec-4786-bf79-a13206608e80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "32b07c2c-090c-4ed8-8b63-f6e0ffc46861",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "33e61c46-46d4-49b5-a4d4-4367a84abc64"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "33e61c46-46d4-49b5-a4d4-4367a84abc64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:15:36.437 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.437 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:15:36.437 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:15:36.437 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.437 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:15:36.437 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:15:36.437 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.437 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:15:36.437 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:15:36.438 11:59:41 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:36.438 11:59:41 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:15:36.438 11:59:41 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:15:36.438 11:59:41 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:36.438 11:59:41 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:15:36.438 11:59:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.438 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:15:36.438 ************************************ 00:15:36.438 START TEST bdev_fio_trim 00:15:36.438 ************************************ 00:15:36.438 11:59:41 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:36.438 11:59:41 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:36.438 11:59:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:36.438 11:59:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.438 11:59:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:36.438 11:59:41 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.438 11:59:41 -- common/autotest_common.sh@1330 -- # shift 00:15:36.438 11:59:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:36.438 11:59:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.438 11:59:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:36.438 11:59:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.438 11:59:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:36.438 11:59:41 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:15:36.438 11:59:41 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:15:36.438 11:59:41 -- common/autotest_common.sh@1336 -- # break 00:15:36.438 11:59:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.438 11:59:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:36.438 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:36.438 fio-3.35 00:15:36.438 Starting 14 threads 00:15:48.643 00:15:48.643 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=121627: Fri Nov 29 11:59:52 2024 00:15:48.643 write: IOPS=126k, BW=493MiB/s (516MB/s)(4926MiB/10002msec); 0 zone resets 00:15:48.643 slat (usec): min=2, max=28055, avg=39.84, stdev=408.77 00:15:48.643 clat (usec): min=25, max=38423, avg=280.93, stdev=1139.85 00:15:48.643 lat (usec): min=37, max=38474, avg=320.76, stdev=1210.11 00:15:48.643 clat percentiles (usec): 00:15:48.643 | 50.000th=[ 190], 99.000th=[ 478], 99.900th=[16319], 99.990th=[20841], 00:15:48.643 | 99.999th=[28181] 00:15:48.643 bw ( KiB/s): min=335413, max=720424, per=100.00%, avg=505953.74, stdev=8856.05, samples=266 00:15:48.643 iops : min=83853, max=180106, avg=126488.47, stdev=2214.02, samples=266 00:15:48.643 trim: IOPS=126k, BW=493MiB/s (516MB/s)(4926MiB/10002msec); 0 zone resets 00:15:48.643 slat (usec): min=4, max=28776, avg=27.37, stdev=353.17 00:15:48.643 clat (usec): min=4, max=38474, avg=304.44, stdev=1156.45 00:15:48.643 lat (usec): min=13, max=38498, avg=331.82, stdev=1208.68 00:15:48.643 clat percentiles (usec): 00:15:48.643 | 50.000th=[ 215], 99.000th=[ 424], 99.900th=[16319], 99.990th=[22152], 00:15:48.643 | 99.999th=[28181] 00:15:48.643 bw ( KiB/s): min=335413, max=720424, per=100.00%, avg=505954.58, stdev=8856.41, samples=266 00:15:48.643 iops : min=83853, max=180106, avg=126488.58, stdev=2214.11, samples=266 00:15:48.643 lat (usec) : 10=0.06%, 20=0.20%, 50=1.01%, 100=6.22%, 250=62.55% 00:15:48.643 lat (usec) : 500=29.14%, 750=0.21%, 1000=0.01% 00:15:48.643 lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%, 20=0.51%, 50=0.02% 00:15:48.643 cpu : usr=69.13%, sys=0.48%, ctx=172521, majf=0, minf=8962 00:15:48.643 IO depths : 1=12.3%, 2=24.7%, 4=50.1%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:48.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.643 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.643 issued rwts: total=0,1261072,1261078,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:48.643 00:15:48.643 Run status group 0 (all jobs): 00:15:48.643 WRITE: bw=493MiB/s (516MB/s), 493MiB/s-493MiB/s (516MB/s-516MB/s), io=4926MiB (5165MB), run=10002-10002msec 00:15:48.643 TRIM: bw=493MiB/s (516MB/s), 493MiB/s-493MiB/s (516MB/s-516MB/s), io=4926MiB (5165MB), run=10002-10002msec 00:15:48.643 ----------------------------------------------------- 00:15:48.643 Suppressions used: 00:15:48.643 count bytes template 00:15:48.643 14 129 /usr/src/fio/parse.c 00:15:48.643 1 904 libcrypto.so 00:15:48.643 ----------------------------------------------------- 00:15:48.643 00:15:48.643 ************************************ 00:15:48.643 END TEST bdev_fio_trim 00:15:48.643 ************************************ 00:15:48.643 00:15:48.643 real 0m11.640s 00:15:48.643 user 1m39.313s 00:15:48.643 sys 0m1.457s 00:15:48.643 11:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.643 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:48.643 11:59:53 -- 
bdev/blockdev.sh@366 -- # rm -f 00:15:48.643 11:59:53 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:48.643 11:59:53 -- bdev/blockdev.sh@368 -- # popd 00:15:48.643 /home/vagrant/spdk_repo/spdk 00:15:48.643 11:59:53 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:15:48.643 00:15:48.643 real 0m23.878s 00:15:48.643 user 3m11.245s 00:15:48.643 sys 0m5.676s 00:15:48.643 11:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.643 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:48.643 ************************************ 00:15:48.643 END TEST bdev_fio 00:15:48.643 ************************************ 00:15:48.643 11:59:53 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:48.643 11:59:53 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:48.643 11:59:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:15:48.644 11:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.644 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:48.644 ************************************ 00:15:48.644 START TEST bdev_verify 00:15:48.644 ************************************ 00:15:48.644 11:59:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:48.644 [2024-11-29 11:59:53.241975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:48.644 [2024-11-29 11:59:53.242244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121797 ] 00:15:48.644 [2024-11-29 11:59:53.385854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.644 [2024-11-29 11:59:53.450178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.644 [2024-11-29 11:59:53.450182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.644 [2024-11-29 11:59:53.596787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:48.644 [2024-11-29 11:59:53.596938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:48.644 [2024-11-29 11:59:53.604703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:48.644 [2024-11-29 11:59:53.604802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:48.644 [2024-11-29 11:59:53.612788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:48.644 [2024-11-29 11:59:53.612869] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:48.644 [2024-11-29 11:59:53.612916] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:48.644 [2024-11-29 11:59:53.712767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:48.644 [2024-11-29 11:59:53.712923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.644 [2024-11-29 11:59:53.713009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:15:48.644 [2024-11-29 11:59:53.713037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.644 [2024-11-29 11:59:53.716100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.644 [2024-11-29 11:59:53.716208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:48.644 Running I/O for 5 seconds... 00:15:53.946 00:15:53.946 Latency(us) 00:15:53.946 [2024-11-29T11:59:59.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x0 length 0x1000 00:15:53.946 Malloc0 : 5.22 1462.54 5.71 0.00 0.00 86847.65 1839.48 240219.23 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x1000 length 0x1000 00:15:53.946 Malloc0 : 5.21 1440.06 5.63 0.00 0.00 88244.07 2219.29 318385.80 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x0 length 0x800 00:15:53.946 Malloc1p0 : 5.22 1021.51 3.99 0.00 0.00 124213.51 4527.94 149660.39 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x800 length 0x800 00:15:53.946 Malloc1p0 : 5.21 1023.09 4.00 0.00 0.00 124002.54 4617.31 149660.39 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x0 length 0x800 00:15:53.946 Malloc1p1 : 5.23 1020.62 3.99 0.00 0.00 124060.71 4200.26 144894.14 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x800 length 0x800 00:15:53.946 Malloc1p1 : 5.22 1022.77 4.00 0.00 0.00 123823.17 4230.05 145847.39 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x0 length 0x200 00:15:53.946 Malloc2p0 : 5.23 1019.87 3.98 0.00 0.00 123923.46 4110.89 141081.13 00:15:53.946 [2024-11-29T11:59:59.457Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.946 Verification LBA range: start 0x200 length 0x200 00:15:53.946 Malloc2p0 : 5.22 1022.49 3.99 0.00 0.00 123656.06 4110.89 141081.13 00:15:53.946 [2024-11-29T11:59:59.458Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p1 : 5.24 1019.09 3.98 0.00 0.00 123796.55 4587.52 135361.63 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x200 length 0x200 00:15:53.947 Malloc2p1 : 5.22 1022.15 3.99 0.00 0.00 123492.52 4617.31 136314.88 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p2 : 5.24 1018.20 3.98 0.00 0.00 123663.33 4349.21 130595.37 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 
0x200 length 0x200 00:15:53.947 Malloc2p2 : 5.22 1021.48 3.99 0.00 0.00 123332.89 4438.57 131548.63 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p3 : 5.24 1017.44 3.97 0.00 0.00 123518.33 4468.36 126782.37 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x200 length 0x200 00:15:53.947 Malloc2p3 : 5.23 1020.66 3.99 0.00 0.00 123191.15 4527.94 126782.37 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p4 : 5.25 1016.79 3.97 0.00 0.00 123380.45 4498.15 126782.37 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x200 length 0x200 00:15:53.947 Malloc2p4 : 5.23 1020.07 3.98 0.00 0.00 123051.92 4557.73 123922.62 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p5 : 5.25 1016.02 3.97 0.00 0.00 123237.02 4289.63 126782.37 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x200 length 0x200 00:15:53.947 Malloc2p5 : 5.23 1019.66 3.98 0.00 0.00 122900.20 4289.63 124875.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p6 : 5.25 1015.37 3.97 0.00 0.00 123109.72 4498.15 127735.62 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x200 length 0x200 00:15:53.947 Malloc2p6 : 5.24 1019.05 3.98 0.00 0.00 122755.42 4498.15 124875.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x200 00:15:53.947 Malloc2p7 : 5.26 1014.61 3.96 0.00 0.00 122955.49 4379.00 127735.62 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x200 length 0x200 00:15:53.947 Malloc2p7 : 5.24 1018.21 3.98 0.00 0.00 122588.41 4408.79 125829.12 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x1000 00:15:53.947 TestPT : 5.26 1001.06 3.91 0.00 0.00 124387.16 8519.68 128688.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x1000 length 0x1000 00:15:53.947 TestPT : 5.24 1001.29 3.91 0.00 0.00 124440.48 43134.60 125829.12 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x2000 00:15:53.947 raid0 : 5.26 1014.12 3.96 0.00 0.00 122550.77 4230.05 128688.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:15:53.947 Verification LBA range: start 0x2000 length 0x2000 00:15:53.947 raid0 : 5.25 1016.99 3.97 0.00 0.00 122277.90 4081.11 125829.12 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x2000 00:15:53.947 concat0 : 5.26 1013.86 3.96 0.00 0.00 122387.82 4170.47 128688.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x2000 length 0x2000 00:15:53.947 concat0 : 5.25 1016.50 3.97 0.00 0.00 122135.27 4259.84 124875.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x1000 00:15:53.947 raid1 : 5.26 1013.61 3.96 0.00 0.00 122201.03 4825.83 128688.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x1000 length 0x1000 00:15:53.947 raid1 : 5.25 1015.94 3.97 0.00 0.00 122003.12 4885.41 124875.87 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x0 length 0x4e2 00:15:53.947 AIO0 : 5.26 1013.20 3.96 0.00 0.00 121899.17 8877.15 126782.37 00:15:53.947 [2024-11-29T11:59:59.458Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.947 Verification LBA range: start 0x4e2 length 0x4e2 00:15:53.947 AIO0 : 5.25 1015.17 3.97 0.00 0.00 121736.28 8877.15 123922.62 00:15:53.947 [2024-11-29T11:59:59.458Z] =================================================================================================================== 00:15:53.947 [2024-11-29T11:59:59.458Z] Total : 33413.47 130.52 0.00 0.00 120074.13 1839.48 318385.80 00:15:54.514 ************************************ 00:15:54.514 END TEST bdev_verify 00:15:54.514 ************************************ 00:15:54.514 00:15:54.514 real 0m6.562s 00:15:54.514 user 0m11.271s 00:15:54.514 sys 0m0.581s 00:15:54.514 11:59:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:54.514 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:15:54.514 11:59:59 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:54.514 11:59:59 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:15:54.514 11:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.514 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:15:54.514 ************************************ 00:15:54.514 START TEST bdev_verify_big_io 00:15:54.514 ************************************ 00:15:54.514 11:59:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:54.514 [2024-11-29 11:59:59.850036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
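Both verification passes drive the same bdevperf binary against the generated bdev.json and differ only in IO size: 4 KiB for bdev_verify, 64 KiB for bdev_verify_big_io. A sketch of the two invocations, with the flags copied from the traced run_test lines:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify, 4 KiB IOs
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io, 64 KiB IOs

-m 0x3 runs the workload on the two reactors started in the log, and -C lets every core submit IO to each bdev, which is why the result tables report one Core Mask 0x1 row and one Core Mask 0x2 row per bdev.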
00:15:54.514 [2024-11-29 11:59:59.850281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121905 ] 00:15:54.514 [2024-11-29 12:00:00.002072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.771 [2024-11-29 12:00:00.084433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.771 [2024-11-29 12:00:00.084437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.771 [2024-11-29 12:00:00.229614] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:54.771 [2024-11-29 12:00:00.229741] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:54.771 [2024-11-29 12:00:00.237533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:54.771 [2024-11-29 12:00:00.237654] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:54.771 [2024-11-29 12:00:00.245592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:54.771 [2024-11-29 12:00:00.245690] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:54.771 [2024-11-29 12:00:00.245739] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:55.029 [2024-11-29 12:00:00.343418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:55.029 [2024-11-29 12:00:00.343634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.029 [2024-11-29 12:00:00.343726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.029 [2024-11-29 12:00:00.343776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.029 [2024-11-29 12:00:00.347046] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.029 [2024-11-29 12:00:00.347112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:55.029 [2024-11-29 12:00:00.526712] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.527919] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.529621] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.531321] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.532497] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.534175] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.535347] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.536961] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.538110] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.539840] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:15:55.029 [2024-11-29 12:00:00.540946] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:15:55.288 [2024-11-29 12:00:00.542624] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:15:55.288 [2024-11-29 12:00:00.543774] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:15:55.288 [2024-11-29 12:00:00.545478] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:15:55.288 [2024-11-29 12:00:00.547198] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:15:55.288 [2024-11-29 12:00:00.548386] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:15:55.288 [2024-11-29 12:00:00.576206] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:55.288 [2024-11-29 12:00:00.578601] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:55.288 Running I/O for 5 seconds... 00:16:01.845 00:16:01.845 Latency(us) 00:16:01.845 [2024-11-29T12:00:07.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.845 [2024-11-29T12:00:07.356Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.845 Verification LBA range: start 0x0 length 0x100 00:16:01.845 Malloc0 : 5.48 412.91 25.81 0.00 0.00 300463.68 19065.02 876990.84 00:16:01.845 [2024-11-29T12:00:07.356Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.845 Verification LBA range: start 0x100 length 0x100 00:16:01.845 Malloc0 : 5.52 388.50 24.28 0.00 0.00 320317.88 18826.71 964689.92 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x80 00:16:01.846 Malloc1p0 : 5.78 131.12 8.20 0.00 0.00 919887.44 45041.11 1822615.74 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x80 length 0x80 00:16:01.846 Malloc1p0 : 5.52 298.09 18.63 0.00 0.00 413587.03 35270.28 629145.60 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x80 00:16:01.846 Malloc1p1 : 5.80 137.19 8.57 0.00 0.00 875808.25 38130.04 1814989.73 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x80 length 0x80 00:16:01.846 Malloc1p1 : 5.64 178.84 11.18 0.00 0.00 673174.11 34078.72 1639591.56 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p0 : 5.58 76.83 4.80 0.00 0.00 389966.63 7000.44 674901.64 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p0 : 5.58 76.83 4.80 0.00 0.00 395891.60 6315.29 556698.53 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p1 : 5.59 76.76 4.80 0.00 0.00 388427.39 7179.17 659649.63 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p1 : 5.59 76.76 4.80 0.00 0.00 394637.96 6225.92 541446.52 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p2 : 5.59 76.71 
4.79 0.00 0.00 386881.69 7149.38 644397.61 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p2 : 5.59 76.70 4.79 0.00 0.00 393337.55 6047.19 530007.51 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p3 : 5.60 76.66 4.79 0.00 0.00 385375.43 6970.65 632958.60 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p3 : 5.60 76.65 4.79 0.00 0.00 392119.36 6553.60 518568.49 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p4 : 5.65 79.78 4.99 0.00 0.00 371669.77 7447.27 617706.59 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p4 : 5.60 76.60 4.79 0.00 0.00 390718.58 6345.08 507129.48 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p5 : 5.65 79.77 4.99 0.00 0.00 370061.29 7238.75 602454.57 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p5 : 5.60 76.58 4.79 0.00 0.00 389526.98 6136.55 495690.47 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p6 : 5.66 79.75 4.98 0.00 0.00 368640.99 5957.82 591015.56 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p6 : 5.60 76.56 4.79 0.00 0.00 388278.44 6374.87 484251.46 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x20 00:16:01.846 Malloc2p7 : 5.66 79.73 4.98 0.00 0.00 367085.29 7328.12 575763.55 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x20 length 0x20 00:16:01.846 Malloc2p7 : 5.60 76.54 4.78 0.00 0.00 386954.15 6732.33 472812.45 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x100 00:16:01.846 TestPT : 5.75 144.27 9.02 0.00 0.00 800363.95 39083.29 1814989.73 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x100 length 0x100 00:16:01.846 TestPT : 5.72 133.69 8.36 0.00 0.00 875157.85 45994.36 2074273.98 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x200 00:16:01.846 raid0 : 5.80 
147.81 9.24 0.00 0.00 768708.95 40513.16 1830241.75 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x200 length 0x200 00:16:01.846 raid0 : 5.72 139.07 8.69 0.00 0.00 834067.60 35746.91 2028517.93 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x200 00:16:01.846 concat0 : 5.82 152.63 9.54 0.00 0.00 733402.19 34793.66 1837867.75 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x200 length 0x200 00:16:01.846 concat0 : 5.73 144.65 9.04 0.00 0.00 794131.31 35985.22 2028517.93 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x100 00:16:01.846 raid1 : 5.81 178.21 11.14 0.00 0.00 622738.44 18588.39 1853119.77 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x100 length 0x100 00:16:01.846 raid1 : 5.77 148.83 9.30 0.00 0.00 757375.97 19422.49 2043769.95 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x0 length 0x4e 00:16:01.846 AIO0 : 5.84 178.55 11.16 0.00 0.00 373651.02 990.49 1067641.02 00:16:01.846 [2024-11-29T12:00:07.357Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:16:01.846 Verification LBA range: start 0x4e length 0x4e 00:16:01.846 AIO0 : 5.73 153.47 9.59 0.00 0.00 445641.92 8102.63 1243039.19 00:16:01.846 [2024-11-29T12:00:07.357Z] =================================================================================================================== 00:16:01.846 [2024-11-29T12:00:07.357Z] Total : 4307.04 269.19 0.00 0.00 527294.16 990.49 2074273.98 00:16:01.846 00:16:01.846 real 0m7.153s 00:16:01.846 user 0m13.052s 00:16:01.846 sys 0m0.502s 00:16:01.846 12:00:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:01.846 ************************************ 00:16:01.847 END TEST bdev_verify_big_io 00:16:01.847 ************************************ 00:16:01.847 12:00:06 -- common/autotest_common.sh@10 -- # set +x 00:16:01.847 12:00:06 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.847 12:00:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:01.847 12:00:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.847 12:00:06 -- common/autotest_common.sh@10 -- # set +x 00:16:01.847 ************************************ 00:16:01.847 START TEST bdev_write_zeroes 00:16:01.847 ************************************ 00:16:01.847 12:00:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.847 [2024-11-29 12:00:07.064898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
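The WARNING lines in the big-IO pass above come from bdevperf clamping the requested queue depth: a verify job cannot keep more IOs outstanding against a bdev than that bdev accepts at once, so -q 128 is cut to 32 for the Malloc2p* split bdevs and to 78 for AIO0. A rough illustration of the clamp with those numbers:

    requested=128
    for limit in 32 78; do
        echo "effective depth = $(( requested < limit ? requested : limit ))"   # prints 32, then 78
    done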
00:16:01.847 [2024-11-29 12:00:07.065194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122013 ] 00:16:01.847 [2024-11-29 12:00:07.215782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.847 [2024-11-29 12:00:07.343277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.104 [2024-11-29 12:00:07.528198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:02.104 [2024-11-29 12:00:07.528323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:02.104 [2024-11-29 12:00:07.536097] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:02.104 [2024-11-29 12:00:07.536183] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:02.104 [2024-11-29 12:00:07.544142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:02.104 [2024-11-29 12:00:07.544210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:16:02.104 [2024-11-29 12:00:07.544279] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:16:02.362 [2024-11-29 12:00:07.658813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:02.362 [2024-11-29 12:00:07.658975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.362 [2024-11-29 12:00:07.659038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.362 [2024-11-29 12:00:07.659085] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.362 [2024-11-29 12:00:07.662251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.362 [2024-11-29 12:00:07.662336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:16:02.620 Running I/O for 1 seconds... 
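The write_zeroes subtest above reuses bdevperf on a single core for a one-second run; the traced invocation boils down to:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1

The per-bdev table that follows reports the resulting IOPS and latencies for every bdev in the config.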
00:16:03.555 00:16:03.555 Latency(us) 00:16:03.555 [2024-11-29T12:00:09.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc0 : 1.04 4780.06 18.67 0.00 0.00 26763.42 856.44 48377.48 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc1p0 : 1.05 4772.83 18.64 0.00 0.00 26745.89 1124.54 47185.92 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc1p1 : 1.05 4766.61 18.62 0.00 0.00 26719.35 1146.88 46232.67 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p0 : 1.05 4759.81 18.59 0.00 0.00 26687.63 1154.33 45041.11 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p1 : 1.05 4753.34 18.57 0.00 0.00 26665.91 1169.22 44087.85 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p2 : 1.05 4747.26 18.54 0.00 0.00 26629.44 1154.33 42896.29 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p3 : 1.05 4741.00 18.52 0.00 0.00 26606.41 1161.77 41943.04 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p4 : 1.05 4734.48 18.49 0.00 0.00 26572.58 1124.54 40751.48 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p5 : 1.06 4728.43 18.47 0.00 0.00 26545.94 1184.12 39798.23 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p6 : 1.06 4721.92 18.44 0.00 0.00 26513.07 1139.43 38606.66 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 Malloc2p7 : 1.06 4715.39 18.42 0.00 0.00 26481.99 1258.59 37415.10 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 TestPT : 1.06 4709.34 18.40 0.00 0.00 26447.73 1176.67 36223.53 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 raid0 : 1.06 4702.02 18.37 0.00 0.00 26405.83 1966.08 34317.03 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 concat0 : 1.06 4695.28 18.34 0.00 0.00 26336.40 1891.61 32648.84 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 raid1 : 1.07 4780.92 18.68 0.00 0.00 25748.99 2978.91 29789.09 00:16:03.555 [2024-11-29T12:00:09.066Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.555 AIO0 : 1.07 4769.52 18.63 0.00 0.00 25666.02 1608.61 29550.78 00:16:03.555 [2024-11-29T12:00:09.066Z] =================================================================================================================== 00:16:03.555 [2024-11-29T12:00:09.066Z] Total : 75878.21 
296.40 0.00 0.00 26468.67 856.44 48377.48 00:16:04.122 00:16:04.122 real 0m2.543s 00:16:04.122 user 0m1.904s 00:16:04.122 sys 0m0.440s 00:16:04.122 12:00:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:04.122 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:04.122 ************************************ 00:16:04.122 END TEST bdev_write_zeroes 00:16:04.122 ************************************ 00:16:04.122 12:00:09 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.122 12:00:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:04.122 12:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.122 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:04.122 ************************************ 00:16:04.122 START TEST bdev_json_nonenclosed 00:16:04.122 ************************************ 00:16:04.122 12:00:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.380 [2024-11-29 12:00:09.674292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:04.380 [2024-11-29 12:00:09.674595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122063 ] 00:16:04.380 [2024-11-29 12:00:09.826108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.637 [2024-11-29 12:00:09.941785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.637 [2024-11-29 12:00:09.942056] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:04.637 [2024-11-29 12:00:09.942102] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:04.637 00:16:04.637 real 0m0.469s 00:16:04.637 user 0m0.245s 00:16:04.637 sys 0m0.121s 00:16:04.637 12:00:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:04.637 ************************************ 00:16:04.637 END TEST bdev_json_nonenclosed 00:16:04.637 ************************************ 00:16:04.637 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:16:04.637 12:00:10 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.637 12:00:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:04.637 12:00:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.637 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:16:04.637 ************************************ 00:16:04.637 START TEST bdev_json_nonarray 00:16:04.637 ************************************ 00:16:04.638 12:00:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.895 [2024-11-29 12:00:10.188435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:04.895 [2024-11-29 12:00:10.189251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122101 ] 00:16:04.895 [2024-11-29 12:00:10.338725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.154 [2024-11-29 12:00:10.429214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.154 [2024-11-29 12:00:10.429517] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:05.154 [2024-11-29 12:00:10.429561] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:05.154 00:16:05.154 real 0m0.413s 00:16:05.154 user 0m0.216s 00:16:05.154 sys 0m0.097s 00:16:05.154 12:00:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:05.154 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:16:05.154 ************************************ 00:16:05.154 END TEST bdev_json_nonarray 00:16:05.154 ************************************ 00:16:05.154 12:00:10 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:16:05.154 12:00:10 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:16:05.154 12:00:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:05.154 12:00:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:05.154 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:16:05.154 ************************************ 00:16:05.154 START TEST bdev_qos 00:16:05.154 ************************************ 00:16:05.154 12:00:10 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:16:05.154 12:00:10 -- bdev/blockdev.sh@444 -- # QOS_PID=122123 00:16:05.154 Process qos testing pid: 122123 00:16:05.154 12:00:10 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 122123' 00:16:05.154 12:00:10 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:16:05.154 12:00:10 -- bdev/blockdev.sh@447 -- # waitforlisten 122123 00:16:05.154 12:00:10 -- common/autotest_common.sh@829 -- # '[' -z 122123 ']' 00:16:05.154 12:00:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.154 12:00:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.154 12:00:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.154 12:00:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.154 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:16:05.154 12:00:10 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:16:05.154 [2024-11-29 12:00:10.652040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
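The two short tests above are negative checks on bdevperf's --json loader: it is pointed at deliberately malformed configs and must abort with the json_config.c errors shown (not enclosed in {} for nonenclosed.json, 'subsystems' not an array for nonarray.json). A sketch of what each file plausibly contains, inferred from those error messages rather than copied from the repository:

    cat > nonenclosed.json <<'EOF'   # top level is not enclosed in {}
    "subsystems": []
    EOF
    cat > nonarray.json <<'EOF'      # "subsystems" present but not an array
    { "subsystems": {} }
    EOF

Both runs are expected to fail, which is why each ends with spdk_app_stop'd on non-zero instead of an IO summary.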
00:16:05.154 [2024-11-29 12:00:10.652555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122123 ] 00:16:05.413 [2024-11-29 12:00:10.803400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.413 [2024-11-29 12:00:10.924215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.345 12:00:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.345 12:00:11 -- common/autotest_common.sh@862 -- # return 0 00:16:06.345 12:00:11 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:16:06.345 12:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.345 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.345 Malloc_0 00:16:06.345 12:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 12:00:11 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:16:06.346 12:00:11 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:16:06.346 12:00:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.346 12:00:11 -- common/autotest_common.sh@899 -- # local i 00:16:06.346 12:00:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.346 12:00:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.346 12:00:11 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:06.346 12:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.346 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.346 12:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 12:00:11 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:16:06.346 12:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.346 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.346 [ 00:16:06.346 { 00:16:06.346 "name": "Malloc_0", 00:16:06.346 "aliases": [ 00:16:06.346 "ffe6fbe6-b5ce-4739-ada6-8416df8be093" 00:16:06.346 ], 00:16:06.346 "product_name": "Malloc disk", 00:16:06.346 "block_size": 512, 00:16:06.346 "num_blocks": 262144, 00:16:06.346 "uuid": "ffe6fbe6-b5ce-4739-ada6-8416df8be093", 00:16:06.346 "assigned_rate_limits": { 00:16:06.346 "rw_ios_per_sec": 0, 00:16:06.346 "rw_mbytes_per_sec": 0, 00:16:06.346 "r_mbytes_per_sec": 0, 00:16:06.346 "w_mbytes_per_sec": 0 00:16:06.346 }, 00:16:06.346 "claimed": false, 00:16:06.346 "zoned": false, 00:16:06.346 "supported_io_types": { 00:16:06.346 "read": true, 00:16:06.346 "write": true, 00:16:06.346 "unmap": true, 00:16:06.346 "write_zeroes": true, 00:16:06.346 "flush": true, 00:16:06.346 "reset": true, 00:16:06.346 "compare": false, 00:16:06.346 "compare_and_write": false, 00:16:06.346 "abort": true, 00:16:06.346 "nvme_admin": false, 00:16:06.346 "nvme_io": false 00:16:06.346 }, 00:16:06.346 "memory_domains": [ 00:16:06.346 { 00:16:06.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.346 "dma_device_type": 2 00:16:06.346 } 00:16:06.346 ], 00:16:06.346 "driver_specific": {} 00:16:06.346 } 00:16:06.346 ] 00:16:06.346 12:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 12:00:11 -- common/autotest_common.sh@905 -- # return 0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:16:06.346 12:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.346 12:00:11 -- common/autotest_common.sh@10 -- # 
set +x 00:16:06.346 Null_1 00:16:06.346 12:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 12:00:11 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:16:06.346 12:00:11 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:16:06.346 12:00:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.346 12:00:11 -- common/autotest_common.sh@899 -- # local i 00:16:06.346 12:00:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.346 12:00:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.346 12:00:11 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:06.346 12:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.346 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.346 12:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 12:00:11 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:16:06.346 12:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.346 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.346 [ 00:16:06.346 { 00:16:06.346 "name": "Null_1", 00:16:06.346 "aliases": [ 00:16:06.346 "eee00b0a-2a04-4c4c-b552-0066ab269bd8" 00:16:06.346 ], 00:16:06.346 "product_name": "Null disk", 00:16:06.346 "block_size": 512, 00:16:06.346 "num_blocks": 262144, 00:16:06.346 "uuid": "eee00b0a-2a04-4c4c-b552-0066ab269bd8", 00:16:06.346 "assigned_rate_limits": { 00:16:06.346 "rw_ios_per_sec": 0, 00:16:06.346 "rw_mbytes_per_sec": 0, 00:16:06.346 "r_mbytes_per_sec": 0, 00:16:06.346 "w_mbytes_per_sec": 0 00:16:06.346 }, 00:16:06.346 "claimed": false, 00:16:06.346 "zoned": false, 00:16:06.346 "supported_io_types": { 00:16:06.346 "read": true, 00:16:06.346 "write": true, 00:16:06.346 "unmap": false, 00:16:06.346 "write_zeroes": true, 00:16:06.346 "flush": false, 00:16:06.346 "reset": true, 00:16:06.346 "compare": false, 00:16:06.346 "compare_and_write": false, 00:16:06.346 "abort": true, 00:16:06.346 "nvme_admin": false, 00:16:06.346 "nvme_io": false 00:16:06.346 }, 00:16:06.346 "driver_specific": {} 00:16:06.346 } 00:16:06.346 ] 00:16:06.346 12:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 12:00:11 -- common/autotest_common.sh@905 -- # return 0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@455 -- # qos_function_test 00:16:06.346 12:00:11 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:06.346 12:00:11 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:16:06.346 12:00:11 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:16:06.346 12:00:11 -- bdev/blockdev.sh@410 -- # local io_result=0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:16:06.346 12:00:11 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:06.346 12:00:11 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:06.346 12:00:11 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:16:06.346 12:00:11 -- bdev/blockdev.sh@376 -- # tail -1 00:16:06.604 Running I/O for 60 seconds... 
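For the QoS suite the harness starts bdevperf in wait mode, creates the target bdevs over RPC, then kicks off the queued workload; the 60-second randread run that just started is the unthrottled baseline whose measured IOPS will seed the limit. A sketch of that setup (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so the direct rpc.py calls below are the assumed equivalents):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' &
    # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc_0 128 512   # 128 MiB, 512 B blocks = 262144 blocks, matching the dump above
    "$SPDK/scripts/rpc.py" bdev_null_create Null_1 128 512
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests &      # assumed backgrounded so iostat can sample during the run
    "$SPDK/scripts/iostat.py" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}'   # baseline IOPS column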
00:16:11.867 12:00:16 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 68019.68 272078.74 0.00 0.00 275456.00 0.00 0.00 ' 00:16:11.867 12:00:16 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:16:11.867 12:00:16 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:16:11.867 12:00:16 -- bdev/blockdev.sh@378 -- # iostat_result=68019.68 00:16:11.867 12:00:16 -- bdev/blockdev.sh@383 -- # echo 68019 00:16:11.867 12:00:16 -- bdev/blockdev.sh@414 -- # io_result=68019 00:16:11.867 12:00:16 -- bdev/blockdev.sh@416 -- # iops_limit=17000 00:16:11.867 12:00:16 -- bdev/blockdev.sh@417 -- # '[' 17000 -gt 1000 ']' 00:16:11.867 12:00:16 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:16:11.867 12:00:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.867 12:00:16 -- common/autotest_common.sh@10 -- # set +x 00:16:11.867 12:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.867 12:00:17 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:16:11.867 12:00:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:11.867 12:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.867 12:00:17 -- common/autotest_common.sh@10 -- # set +x 00:16:11.867 ************************************ 00:16:11.867 START TEST bdev_qos_iops 00:16:11.867 ************************************ 00:16:11.867 12:00:17 -- common/autotest_common.sh@1114 -- # run_qos_test 17000 IOPS Malloc_0 00:16:11.867 12:00:17 -- bdev/blockdev.sh@387 -- # local qos_limit=17000 00:16:11.867 12:00:17 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:16:11.867 12:00:17 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:16:11.867 12:00:17 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:16:11.867 12:00:17 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:16:11.867 12:00:17 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:11.867 12:00:17 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:11.867 12:00:17 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:16:11.867 12:00:17 -- bdev/blockdev.sh@376 -- # tail -1 00:16:17.156 12:00:22 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 16968.94 67875.77 0.00 0.00 68884.00 0.00 0.00 ' 00:16:17.156 12:00:22 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:16:17.156 12:00:22 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:16:17.156 12:00:22 -- bdev/blockdev.sh@378 -- # iostat_result=16968.94 00:16:17.156 12:00:22 -- bdev/blockdev.sh@383 -- # echo 16968 00:16:17.156 12:00:22 -- bdev/blockdev.sh@390 -- # qos_result=16968 00:16:17.156 12:00:22 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:16:17.156 12:00:22 -- bdev/blockdev.sh@394 -- # lower_limit=15300 00:16:17.156 12:00:22 -- bdev/blockdev.sh@395 -- # upper_limit=18700 00:16:17.156 12:00:22 -- bdev/blockdev.sh@398 -- # '[' 16968 -lt 15300 ']' 00:16:17.156 12:00:22 -- bdev/blockdev.sh@398 -- # '[' 16968 -gt 18700 ']' 00:16:17.156 00:16:17.156 real 0m5.277s 00:16:17.156 user 0m0.176s 00:16:17.156 sys 0m0.030s 00:16:17.156 12:00:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:17.156 12:00:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.156 ************************************ 00:16:17.156 END TEST bdev_qos_iops 00:16:17.156 ************************************ 00:16:17.156 12:00:22 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:16:17.156 12:00:22 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:16:17.156 12:00:22 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:16:17.156 12:00:22 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:17.156 12:00:22 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:17.156 12:00:22 -- bdev/blockdev.sh@376 -- # grep Null_1 00:16:17.156 12:00:22 -- bdev/blockdev.sh@376 -- # tail -1 00:16:22.418 12:00:27 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 24601.35 98405.41 0.00 0.00 100352.00 0.00 0.00 ' 00:16:22.418 12:00:27 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:16:22.418 12:00:27 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:16:22.418 12:00:27 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:16:22.418 12:00:27 -- bdev/blockdev.sh@380 -- # iostat_result=100352.00 00:16:22.418 12:00:27 -- bdev/blockdev.sh@383 -- # echo 100352 00:16:22.418 12:00:27 -- bdev/blockdev.sh@425 -- # bw_limit=100352 00:16:22.418 12:00:27 -- bdev/blockdev.sh@426 -- # bw_limit=9 00:16:22.418 12:00:27 -- bdev/blockdev.sh@427 -- # '[' 9 -lt 2 ']' 00:16:22.418 12:00:27 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:16:22.418 12:00:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.418 12:00:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.418 12:00:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.418 12:00:27 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:16:22.419 12:00:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:22.419 12:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.419 12:00:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 ************************************ 00:16:22.419 START TEST bdev_qos_bw 00:16:22.419 ************************************ 00:16:22.419 12:00:27 -- common/autotest_common.sh@1114 -- # run_qos_test 9 BANDWIDTH Null_1 00:16:22.419 12:00:27 -- bdev/blockdev.sh@387 -- # local qos_limit=9 00:16:22.419 12:00:27 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:16:22.419 12:00:27 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:16:22.419 12:00:27 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:16:22.419 12:00:27 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:16:22.419 12:00:27 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:22.419 12:00:27 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:22.419 12:00:27 -- bdev/blockdev.sh@376 -- # grep Null_1 00:16:22.419 12:00:27 -- bdev/blockdev.sh@376 -- # tail -1 00:16:27.683 12:00:32 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2307.25 9229.01 0.00 0.00 9456.00 0.00 0.00 ' 00:16:27.683 12:00:32 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:16:27.683 12:00:32 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:16:27.683 12:00:32 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:16:27.683 12:00:32 -- bdev/blockdev.sh@380 -- # iostat_result=9456.00 00:16:27.683 12:00:32 -- bdev/blockdev.sh@383 -- # echo 9456 00:16:27.683 12:00:32 -- bdev/blockdev.sh@390 -- # qos_result=9456 00:16:27.683 12:00:32 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:16:27.683 12:00:32 -- bdev/blockdev.sh@392 -- # qos_limit=9216 00:16:27.683 12:00:32 -- bdev/blockdev.sh@394 -- # lower_limit=8294 00:16:27.683 12:00:32 -- bdev/blockdev.sh@395 -- # upper_limit=10137 00:16:27.683 12:00:32 -- bdev/blockdev.sh@398 -- # '[' 9456 -lt 8294 ']' 00:16:27.683 12:00:32 -- bdev/blockdev.sh@398 -- # '[' 9456 -gt 10137 
']' 00:16:27.683 00:16:27.683 real 0m5.243s 00:16:27.683 user 0m0.116s 00:16:27.683 sys 0m0.028s 00:16:27.683 12:00:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:27.683 12:00:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.683 ************************************ 00:16:27.683 END TEST bdev_qos_bw 00:16:27.683 ************************************ 00:16:27.683 12:00:32 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:16:27.683 12:00:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.683 12:00:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.683 12:00:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.683 12:00:32 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:16:27.683 12:00:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:27.683 12:00:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.683 12:00:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.683 ************************************ 00:16:27.683 START TEST bdev_qos_ro_bw 00:16:27.683 ************************************ 00:16:27.683 12:00:32 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:16:27.683 12:00:32 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:16:27.683 12:00:32 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:16:27.683 12:00:32 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:16:27.683 12:00:32 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:16:27.683 12:00:32 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:16:27.683 12:00:32 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:27.683 12:00:32 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:27.683 12:00:32 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:16:27.683 12:00:32 -- bdev/blockdev.sh@376 -- # tail -1 00:16:32.946 12:00:38 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.26 2049.06 0.00 0.00 2064.00 0.00 0.00 ' 00:16:32.946 12:00:38 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:16:32.946 12:00:38 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:16:32.946 12:00:38 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:16:32.946 12:00:38 -- bdev/blockdev.sh@380 -- # iostat_result=2064.00 00:16:32.946 12:00:38 -- bdev/blockdev.sh@383 -- # echo 2064 00:16:32.946 12:00:38 -- bdev/blockdev.sh@390 -- # qos_result=2064 00:16:32.946 12:00:38 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:16:32.946 12:00:38 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:16:32.946 12:00:38 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:16:32.946 12:00:38 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:16:32.946 12:00:38 -- bdev/blockdev.sh@398 -- # '[' 2064 -lt 1843 ']' 00:16:32.946 12:00:38 -- bdev/blockdev.sh@398 -- # '[' 2064 -gt 2252 ']' 00:16:32.946 00:16:32.946 real 0m5.171s 00:16:32.946 user 0m0.119s 00:16:32.946 sys 0m0.032s 00:16:32.946 12:00:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:32.946 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:16:32.946 ************************************ 00:16:32.946 END TEST bdev_qos_ro_bw 00:16:32.946 ************************************ 00:16:32.946 12:00:38 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:16:32.946 12:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.946 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:16:33.204 12:00:38 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.204 12:00:38 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:16:33.204 12:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.204 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:16:33.462 00:16:33.462 Latency(us) 00:16:33.462 [2024-11-29T12:00:38.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.462 [2024-11-29T12:00:38.973Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:33.462 Malloc_0 : 26.79 22919.07 89.53 0.00 0.00 11066.67 2546.97 503316.48 00:16:33.462 [2024-11-29T12:00:38.973Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:33.462 Null_1 : 26.92 22439.06 87.65 0.00 0.00 11380.28 811.75 136314.88 00:16:33.462 [2024-11-29T12:00:38.973Z] =================================================================================================================== 00:16:33.462 [2024-11-29T12:00:38.973Z] Total : 45358.13 177.18 0.00 0.00 11222.22 811.75 503316.48 00:16:33.462 0 00:16:33.462 12:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.462 12:00:38 -- bdev/blockdev.sh@459 -- # killprocess 122123 00:16:33.462 12:00:38 -- common/autotest_common.sh@936 -- # '[' -z 122123 ']' 00:16:33.462 12:00:38 -- common/autotest_common.sh@940 -- # kill -0 122123 00:16:33.462 12:00:38 -- common/autotest_common.sh@941 -- # uname 00:16:33.462 12:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.462 12:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122123 00:16:33.462 12:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:33.462 12:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:33.462 killing process with pid 122123 00:16:33.462 12:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122123' 00:16:33.462 Received shutdown signal, test time was about 26.957888 seconds 00:16:33.462 00:16:33.462 Latency(us) 00:16:33.462 [2024-11-29T12:00:38.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.462 [2024-11-29T12:00:38.973Z] =================================================================================================================== 00:16:33.462 [2024-11-29T12:00:38.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:33.462 12:00:38 -- common/autotest_common.sh@955 -- # kill 122123 00:16:33.462 12:00:38 -- common/autotest_common.sh@960 -- # wait 122123 00:16:33.721 12:00:39 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:16:33.721 00:16:33.721 real 0m28.541s 00:16:33.721 user 0m29.437s 00:16:33.721 sys 0m0.664s 00:16:33.721 12:00:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:33.721 12:00:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.721 ************************************ 00:16:33.721 END TEST bdev_qos 00:16:33.721 ************************************ 00:16:33.721 12:00:39 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:16:33.721 12:00:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:33.721 12:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.721 12:00:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.721 ************************************ 00:16:33.721 START TEST bdev_qd_sampling 00:16:33.721 ************************************ 00:16:33.721 12:00:39 -- common/autotest_common.sh@1114 -- # qd_sampling_test_suite '' 00:16:33.721 
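The qd_sampling suite that starts here needs only two RPCs beyond the usual bdev setup: one to turn queue-depth sampling on and one to read the sampled fields back. A standalone sketch, assuming a running SPDK target on the default RPC socket, jq on the PATH, and rpc.py in place of the suite's rpc_cmd wrapper:

RPC=${RPC:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}   # assumed rpc.py location

"$RPC" bdev_malloc_create -b Malloc_QD 128 512     # 128 MiB bdev, 512-byte blocks, as in the test
"$RPC" bdev_set_qd_sampling_period Malloc_QD 10    # sampling period value 10, matching the test below

# Once I/O has run, the per-bdev iostat payload carries the sampled fields:
# queue_depth_polling_period (checked to round-trip as 10 further down),
# plus queue_depth, io_time and weighted_io_time.
"$RPC" bdev_get_iostat -b Malloc_QD \
    | jq '.bdevs[0] | {queue_depth_polling_period, queue_depth, io_time, weighted_io_time}'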
12:00:39 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:16:33.721 12:00:39 -- bdev/blockdev.sh@539 -- # QD_PID=122593 00:16:33.721 Process bdev QD sampling period testing pid: 122593 00:16:33.721 12:00:39 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 122593' 00:16:33.721 12:00:39 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:16:33.721 12:00:39 -- bdev/blockdev.sh@542 -- # waitforlisten 122593 00:16:33.721 12:00:39 -- common/autotest_common.sh@829 -- # '[' -z 122593 ']' 00:16:33.721 12:00:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.721 12:00:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.721 12:00:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.721 12:00:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.721 12:00:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.721 12:00:39 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:16:33.979 [2024-11-29 12:00:39.258562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:33.979 [2024-11-29 12:00:39.259063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122593 ] 00:16:33.979 [2024-11-29 12:00:39.414572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:34.238 [2024-11-29 12:00:39.513940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.238 [2024-11-29 12:00:39.513950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.804 12:00:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.804 12:00:40 -- common/autotest_common.sh@862 -- # return 0 00:16:34.804 12:00:40 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:16:34.805 12:00:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.805 12:00:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.063 Malloc_QD 00:16:35.063 12:00:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.063 12:00:40 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:16:35.063 12:00:40 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:16:35.063 12:00:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.063 12:00:40 -- common/autotest_common.sh@899 -- # local i 00:16:35.063 12:00:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.063 12:00:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.063 12:00:40 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:35.063 12:00:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.063 12:00:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.063 12:00:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.063 12:00:40 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:16:35.063 12:00:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.063 12:00:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.063 [ 00:16:35.063 { 00:16:35.063 "name": 
"Malloc_QD", 00:16:35.063 "aliases": [ 00:16:35.063 "201963b5-820d-4a2d-969e-0a0943bf3b1c" 00:16:35.063 ], 00:16:35.063 "product_name": "Malloc disk", 00:16:35.063 "block_size": 512, 00:16:35.063 "num_blocks": 262144, 00:16:35.063 "uuid": "201963b5-820d-4a2d-969e-0a0943bf3b1c", 00:16:35.063 "assigned_rate_limits": { 00:16:35.063 "rw_ios_per_sec": 0, 00:16:35.063 "rw_mbytes_per_sec": 0, 00:16:35.063 "r_mbytes_per_sec": 0, 00:16:35.063 "w_mbytes_per_sec": 0 00:16:35.063 }, 00:16:35.063 "claimed": false, 00:16:35.063 "zoned": false, 00:16:35.063 "supported_io_types": { 00:16:35.063 "read": true, 00:16:35.063 "write": true, 00:16:35.063 "unmap": true, 00:16:35.063 "write_zeroes": true, 00:16:35.063 "flush": true, 00:16:35.063 "reset": true, 00:16:35.063 "compare": false, 00:16:35.063 "compare_and_write": false, 00:16:35.063 "abort": true, 00:16:35.063 "nvme_admin": false, 00:16:35.063 "nvme_io": false 00:16:35.063 }, 00:16:35.063 "memory_domains": [ 00:16:35.063 { 00:16:35.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.063 "dma_device_type": 2 00:16:35.063 } 00:16:35.063 ], 00:16:35.063 "driver_specific": {} 00:16:35.063 } 00:16:35.063 ] 00:16:35.063 12:00:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.063 12:00:40 -- common/autotest_common.sh@905 -- # return 0 00:16:35.063 12:00:40 -- bdev/blockdev.sh@548 -- # sleep 2 00:16:35.063 12:00:40 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:35.063 Running I/O for 5 seconds... 00:16:36.967 12:00:42 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:16:36.967 12:00:42 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:16:36.967 12:00:42 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:16:36.967 12:00:42 -- bdev/blockdev.sh@519 -- # local iostats 00:16:36.967 12:00:42 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:16:36.967 12:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.967 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:36.967 12:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.967 12:00:42 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:16:36.967 12:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.967 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:36.967 12:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.967 12:00:42 -- bdev/blockdev.sh@523 -- # iostats='{ 00:16:36.967 "tick_rate": 2200000000, 00:16:36.967 "ticks": 1590128066260, 00:16:36.967 "bdevs": [ 00:16:36.967 { 00:16:36.967 "name": "Malloc_QD", 00:16:36.967 "bytes_read": 850432512, 00:16:36.967 "num_read_ops": 207619, 00:16:36.967 "bytes_written": 0, 00:16:36.968 "num_write_ops": 0, 00:16:36.968 "bytes_unmapped": 0, 00:16:36.968 "num_unmap_ops": 0, 00:16:36.968 "bytes_copied": 0, 00:16:36.968 "num_copy_ops": 0, 00:16:36.968 "read_latency_ticks": 2165664064112, 00:16:36.968 "max_read_latency_ticks": 14209464, 00:16:36.968 "min_read_latency_ticks": 450604, 00:16:36.968 "write_latency_ticks": 0, 00:16:36.968 "max_write_latency_ticks": 0, 00:16:36.968 "min_write_latency_ticks": 0, 00:16:36.968 "unmap_latency_ticks": 0, 00:16:36.968 "max_unmap_latency_ticks": 0, 00:16:36.968 "min_unmap_latency_ticks": 0, 00:16:36.968 "copy_latency_ticks": 0, 00:16:36.968 "max_copy_latency_ticks": 0, 00:16:36.968 "min_copy_latency_ticks": 0, 00:16:36.968 "io_error": {}, 00:16:36.968 "queue_depth_polling_period": 10, 00:16:36.968 
"queue_depth": 512, 00:16:36.968 "io_time": 30, 00:16:36.968 "weighted_io_time": 15360 00:16:36.968 } 00:16:36.968 ] 00:16:36.968 }' 00:16:36.968 12:00:42 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:16:36.968 12:00:42 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:16:36.968 12:00:42 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:16:36.968 12:00:42 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:16:36.968 12:00:42 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:16:36.968 12:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.968 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:36.968 00:16:36.968 Latency(us) 00:16:36.968 [2024-11-29T12:00:42.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.968 [2024-11-29T12:00:42.479Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:36.968 Malloc_QD : 2.00 54091.86 211.30 0.00 0.00 4720.91 1020.28 5153.51 00:16:36.968 [2024-11-29T12:00:42.479Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:36.968 Malloc_QD : 2.00 53937.49 210.69 0.00 0.00 4733.71 796.86 6464.23 00:16:36.968 [2024-11-29T12:00:42.479Z] =================================================================================================================== 00:16:36.968 [2024-11-29T12:00:42.479Z] Total : 108029.35 421.99 0.00 0.00 4727.31 796.86 6464.23 00:16:37.226 0 00:16:37.226 12:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.226 12:00:42 -- bdev/blockdev.sh@552 -- # killprocess 122593 00:16:37.226 12:00:42 -- common/autotest_common.sh@936 -- # '[' -z 122593 ']' 00:16:37.226 12:00:42 -- common/autotest_common.sh@940 -- # kill -0 122593 00:16:37.226 12:00:42 -- common/autotest_common.sh@941 -- # uname 00:16:37.226 12:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.226 12:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122593 00:16:37.226 12:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.226 12:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.226 killing process with pid 122593 00:16:37.226 12:00:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122593' 00:16:37.226 12:00:42 -- common/autotest_common.sh@955 -- # kill 122593 00:16:37.226 Received shutdown signal, test time was about 2.055135 seconds 00:16:37.226 00:16:37.226 Latency(us) 00:16:37.226 [2024-11-29T12:00:42.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.226 [2024-11-29T12:00:42.737Z] =================================================================================================================== 00:16:37.226 [2024-11-29T12:00:42.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.226 12:00:42 -- common/autotest_common.sh@960 -- # wait 122593 00:16:37.485 12:00:42 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:16:37.485 00:16:37.485 real 0m3.587s 00:16:37.485 user 0m7.088s 00:16:37.485 sys 0m0.338s 00:16:37.485 12:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:37.485 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.485 ************************************ 00:16:37.485 END TEST bdev_qd_sampling 00:16:37.485 ************************************ 00:16:37.485 12:00:42 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:16:37.485 12:00:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 
']' 00:16:37.485 12:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.485 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.485 ************************************ 00:16:37.485 START TEST bdev_error 00:16:37.485 ************************************ 00:16:37.485 12:00:42 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:16:37.485 12:00:42 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:16:37.485 12:00:42 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:16:37.485 12:00:42 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:16:37.485 12:00:42 -- bdev/blockdev.sh@470 -- # ERR_PID=122676 00:16:37.485 12:00:42 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 122676' 00:16:37.485 Process error testing pid: 122676 00:16:37.485 12:00:42 -- bdev/blockdev.sh@472 -- # waitforlisten 122676 00:16:37.485 12:00:42 -- common/autotest_common.sh@829 -- # '[' -z 122676 ']' 00:16:37.485 12:00:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.485 12:00:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.485 12:00:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.485 12:00:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.485 12:00:42 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:16:37.485 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.485 [2024-11-29 12:00:42.904475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:37.485 [2024-11-29 12:00:42.904953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122676 ] 00:16:37.745 [2024-11-29 12:00:43.051074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.745 [2024-11-29 12:00:43.109607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.681 12:00:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.681 12:00:43 -- common/autotest_common.sh@862 -- # return 0 00:16:38.681 12:00:43 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:16:38.681 12:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 Dev_1 00:16:38.682 12:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:43 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:16:38.682 12:00:43 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:16:38.682 12:00:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.682 12:00:43 -- common/autotest_common.sh@899 -- # local i 00:16:38.682 12:00:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.682 12:00:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.682 12:00:43 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:38.682 12:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 12:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:43 -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_get_bdevs -b Dev_1 -t 2000 00:16:38.682 12:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 [ 00:16:38.682 { 00:16:38.682 "name": "Dev_1", 00:16:38.682 "aliases": [ 00:16:38.682 "0d259096-2606-44e0-9d33-91aa042f8d39" 00:16:38.682 ], 00:16:38.682 "product_name": "Malloc disk", 00:16:38.682 "block_size": 512, 00:16:38.682 "num_blocks": 262144, 00:16:38.682 "uuid": "0d259096-2606-44e0-9d33-91aa042f8d39", 00:16:38.682 "assigned_rate_limits": { 00:16:38.682 "rw_ios_per_sec": 0, 00:16:38.682 "rw_mbytes_per_sec": 0, 00:16:38.682 "r_mbytes_per_sec": 0, 00:16:38.682 "w_mbytes_per_sec": 0 00:16:38.682 }, 00:16:38.682 "claimed": false, 00:16:38.682 "zoned": false, 00:16:38.682 "supported_io_types": { 00:16:38.682 "read": true, 00:16:38.682 "write": true, 00:16:38.682 "unmap": true, 00:16:38.682 "write_zeroes": true, 00:16:38.682 "flush": true, 00:16:38.682 "reset": true, 00:16:38.682 "compare": false, 00:16:38.682 "compare_and_write": false, 00:16:38.682 "abort": true, 00:16:38.682 "nvme_admin": false, 00:16:38.682 "nvme_io": false 00:16:38.682 }, 00:16:38.682 "memory_domains": [ 00:16:38.682 { 00:16:38.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.682 "dma_device_type": 2 00:16:38.682 } 00:16:38.682 ], 00:16:38.682 "driver_specific": {} 00:16:38.682 } 00:16:38.682 ] 00:16:38.682 12:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:43 -- common/autotest_common.sh@905 -- # return 0 00:16:38.682 12:00:43 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:16:38.682 12:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 true 00:16:38.682 12:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:43 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:16:38.682 12:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 Dev_2 00:16:38.682 12:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:44 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:16:38.682 12:00:44 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:16:38.682 12:00:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.682 12:00:44 -- common/autotest_common.sh@899 -- # local i 00:16:38.682 12:00:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.682 12:00:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.682 12:00:44 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:38.682 12:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 12:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:44 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:16:38.682 12:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 [ 00:16:38.682 { 00:16:38.682 "name": "Dev_2", 00:16:38.682 "aliases": [ 00:16:38.682 "d70e76c0-1a0a-4238-993d-661221adeece" 00:16:38.682 ], 00:16:38.682 "product_name": "Malloc disk", 00:16:38.682 "block_size": 512, 00:16:38.682 "num_blocks": 262144, 00:16:38.682 "uuid": "d70e76c0-1a0a-4238-993d-661221adeece", 00:16:38.682 "assigned_rate_limits": { 00:16:38.682 "rw_ios_per_sec": 
0, 00:16:38.682 "rw_mbytes_per_sec": 0, 00:16:38.682 "r_mbytes_per_sec": 0, 00:16:38.682 "w_mbytes_per_sec": 0 00:16:38.682 }, 00:16:38.682 "claimed": false, 00:16:38.682 "zoned": false, 00:16:38.682 "supported_io_types": { 00:16:38.682 "read": true, 00:16:38.682 "write": true, 00:16:38.682 "unmap": true, 00:16:38.682 "write_zeroes": true, 00:16:38.682 "flush": true, 00:16:38.682 "reset": true, 00:16:38.682 "compare": false, 00:16:38.682 "compare_and_write": false, 00:16:38.682 "abort": true, 00:16:38.682 "nvme_admin": false, 00:16:38.682 "nvme_io": false 00:16:38.682 }, 00:16:38.682 "memory_domains": [ 00:16:38.682 { 00:16:38.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.682 "dma_device_type": 2 00:16:38.682 } 00:16:38.682 ], 00:16:38.682 "driver_specific": {} 00:16:38.682 } 00:16:38.682 ] 00:16:38.682 12:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:44 -- common/autotest_common.sh@905 -- # return 0 00:16:38.682 12:00:44 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:16:38.682 12:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.682 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:16:38.682 12:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.682 12:00:44 -- bdev/blockdev.sh@482 -- # sleep 1 00:16:38.682 12:00:44 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:16:38.682 Running I/O for 5 seconds... 00:16:39.617 12:00:45 -- bdev/blockdev.sh@485 -- # kill -0 122676 00:16:39.617 Process is existed as continue on error is set. Pid: 122676 00:16:39.617 12:00:45 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 122676' 00:16:39.617 12:00:45 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:16:39.617 12:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.617 12:00:45 -- common/autotest_common.sh@10 -- # set +x 00:16:39.617 12:00:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.617 12:00:45 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:16:39.617 12:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.617 12:00:45 -- common/autotest_common.sh@10 -- # set +x 00:16:39.617 12:00:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.617 12:00:45 -- bdev/blockdev.sh@495 -- # sleep 5 00:16:39.876 Timeout while waiting for response: 00:16:39.876 00:16:39.876 00:16:44.063 00:16:44.063 Latency(us) 00:16:44.063 [2024-11-29T12:00:49.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.063 [2024-11-29T12:00:49.574Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:44.063 EE_Dev_1 : 0.90 38677.79 151.09 5.53 0.00 410.57 173.15 875.05 00:16:44.063 [2024-11-29T12:00:49.574Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:44.063 Dev_2 : 5.00 85990.32 335.90 0.00 0.00 183.10 112.64 26333.56 00:16:44.063 [2024-11-29T12:00:49.574Z] =================================================================================================================== 00:16:44.063 [2024-11-29T12:00:49.574Z] Total : 124668.11 486.98 5.53 0.00 200.21 112.64 26333.56 00:16:44.628 12:00:50 -- bdev/blockdev.sh@497 -- # killprocess 122676 00:16:44.628 12:00:50 -- common/autotest_common.sh@936 -- # '[' -z 122676 ']' 00:16:44.628 12:00:50 -- common/autotest_common.sh@940 -- # kill -0 122676 00:16:44.628 12:00:50 -- 
common/autotest_common.sh@941 -- # uname 00:16:44.628 12:00:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.628 12:00:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122676 00:16:44.886 12:00:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:44.886 12:00:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:44.886 killing process with pid 122676 00:16:44.886 12:00:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122676' 00:16:44.886 Received shutdown signal, test time was about 5.000000 seconds 00:16:44.886 00:16:44.886 Latency(us) 00:16:44.886 [2024-11-29T12:00:50.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.886 [2024-11-29T12:00:50.397Z] =================================================================================================================== 00:16:44.886 [2024-11-29T12:00:50.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.886 12:00:50 -- common/autotest_common.sh@955 -- # kill 122676 00:16:44.886 12:00:50 -- common/autotest_common.sh@960 -- # wait 122676 00:16:45.144 12:00:50 -- bdev/blockdev.sh@501 -- # ERR_PID=122779 00:16:45.144 12:00:50 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:16:45.144 Process error testing pid: 122779 00:16:45.144 12:00:50 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 122779' 00:16:45.144 12:00:50 -- bdev/blockdev.sh@503 -- # waitforlisten 122779 00:16:45.144 12:00:50 -- common/autotest_common.sh@829 -- # '[' -z 122779 ']' 00:16:45.144 12:00:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.144 12:00:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.144 12:00:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.144 12:00:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.144 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.144 [2024-11-29 12:00:50.509547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:45.144 [2024-11-29 12:00:50.509820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122779 ] 00:16:45.403 [2024-11-29 12:00:50.657319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.403 [2024-11-29 12:00:50.743300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.339 12:00:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.339 12:00:51 -- common/autotest_common.sh@862 -- # return 0 00:16:46.339 12:00:51 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:16:46.339 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.339 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.339 Dev_1 00:16:46.339 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.339 12:00:51 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:16:46.339 12:00:51 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:16:46.339 12:00:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:46.339 12:00:51 -- common/autotest_common.sh@899 -- # local i 00:16:46.339 12:00:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:46.339 12:00:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:46.339 12:00:51 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:46.339 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.339 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.339 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.339 12:00:51 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:16:46.339 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.339 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.339 [ 00:16:46.339 { 00:16:46.339 "name": "Dev_1", 00:16:46.339 "aliases": [ 00:16:46.339 "0871428c-321c-44e0-835e-912859afbc7b" 00:16:46.339 ], 00:16:46.339 "product_name": "Malloc disk", 00:16:46.339 "block_size": 512, 00:16:46.339 "num_blocks": 262144, 00:16:46.339 "uuid": "0871428c-321c-44e0-835e-912859afbc7b", 00:16:46.339 "assigned_rate_limits": { 00:16:46.339 "rw_ios_per_sec": 0, 00:16:46.339 "rw_mbytes_per_sec": 0, 00:16:46.339 "r_mbytes_per_sec": 0, 00:16:46.339 "w_mbytes_per_sec": 0 00:16:46.339 }, 00:16:46.339 "claimed": false, 00:16:46.339 "zoned": false, 00:16:46.339 "supported_io_types": { 00:16:46.339 "read": true, 00:16:46.339 "write": true, 00:16:46.339 "unmap": true, 00:16:46.339 "write_zeroes": true, 00:16:46.339 "flush": true, 00:16:46.339 "reset": true, 00:16:46.339 "compare": false, 00:16:46.339 "compare_and_write": false, 00:16:46.339 "abort": true, 00:16:46.339 "nvme_admin": false, 00:16:46.339 "nvme_io": false 00:16:46.339 }, 00:16:46.339 "memory_domains": [ 00:16:46.339 { 00:16:46.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.339 "dma_device_type": 2 00:16:46.339 } 00:16:46.339 ], 00:16:46.339 "driver_specific": {} 00:16:46.339 } 00:16:46.339 ] 00:16:46.339 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.339 12:00:51 -- common/autotest_common.sh@905 -- # return 0 00:16:46.339 12:00:51 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:16:46.339 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.339 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.339 true 
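Stripped of the waitforbdev scaffolding, the error path being rebuilt here is the same three bdevs and one injection RPC used in the run above. The difference between the two phases is the bdevperf invocation: the first instance (pid 122676) was launched with -f and the suite notes "continue on error is set", so it survives the injected failures, while the instance started here (pid 122779) omits -f and the same injection makes perform_tests fail with the -32603 JSON-RPC error shown below. A sketch of the setup under the same assumptions as the earlier ones (rpc.py for rpc_cmd):

RPC=${RPC:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}   # assumed rpc.py location

"$RPC" bdev_malloc_create -b Dev_1 128 512
"$RPC" bdev_error_create Dev_1              # layers the passthrough error bdev EE_Dev_1 on Dev_1
"$RPC" bdev_malloc_create -b Dev_2 128 512

# Inject failure into the next 5 I/Os of any type submitted through EE_Dev_1;
# in the earlier run this shows up as ~5 failures (5.53 Fail/s over 0.90 s)
# on EE_Dev_1 while Dev_2 completes its full 5-second job cleanly.
"$RPC" bdev_error_inject_error EE_Dev_1 all failure -n 5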
00:16:46.339 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.339 12:00:51 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:16:46.339 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.339 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.340 Dev_2 00:16:46.340 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.340 12:00:51 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:16:46.340 12:00:51 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:16:46.340 12:00:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:46.340 12:00:51 -- common/autotest_common.sh@899 -- # local i 00:16:46.340 12:00:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:46.340 12:00:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:46.340 12:00:51 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:46.340 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.340 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.340 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.340 12:00:51 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:16:46.340 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.340 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.340 [ 00:16:46.340 { 00:16:46.340 "name": "Dev_2", 00:16:46.340 "aliases": [ 00:16:46.340 "20c762c7-c9b5-4e8e-b2d6-2a750c1cfc52" 00:16:46.340 ], 00:16:46.340 "product_name": "Malloc disk", 00:16:46.340 "block_size": 512, 00:16:46.340 "num_blocks": 262144, 00:16:46.340 "uuid": "20c762c7-c9b5-4e8e-b2d6-2a750c1cfc52", 00:16:46.340 "assigned_rate_limits": { 00:16:46.340 "rw_ios_per_sec": 0, 00:16:46.340 "rw_mbytes_per_sec": 0, 00:16:46.340 "r_mbytes_per_sec": 0, 00:16:46.340 "w_mbytes_per_sec": 0 00:16:46.340 }, 00:16:46.340 "claimed": false, 00:16:46.340 "zoned": false, 00:16:46.340 "supported_io_types": { 00:16:46.340 "read": true, 00:16:46.340 "write": true, 00:16:46.340 "unmap": true, 00:16:46.340 "write_zeroes": true, 00:16:46.340 "flush": true, 00:16:46.340 "reset": true, 00:16:46.340 "compare": false, 00:16:46.340 "compare_and_write": false, 00:16:46.340 "abort": true, 00:16:46.340 "nvme_admin": false, 00:16:46.340 "nvme_io": false 00:16:46.340 }, 00:16:46.340 "memory_domains": [ 00:16:46.340 { 00:16:46.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.340 "dma_device_type": 2 00:16:46.340 } 00:16:46.340 ], 00:16:46.340 "driver_specific": {} 00:16:46.340 } 00:16:46.340 ] 00:16:46.340 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.340 12:00:51 -- common/autotest_common.sh@905 -- # return 0 00:16:46.340 12:00:51 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:16:46.340 12:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.340 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.340 12:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.340 12:00:51 -- bdev/blockdev.sh@513 -- # NOT wait 122779 00:16:46.340 12:00:51 -- common/autotest_common.sh@650 -- # local es=0 00:16:46.340 12:00:51 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:16:46.340 12:00:51 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 122779 00:16:46.340 12:00:51 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:46.340 12:00:51 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.340 12:00:51 -- common/autotest_common.sh@642 -- # type -t wait 00:16:46.340 12:00:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.340 12:00:51 -- common/autotest_common.sh@653 -- # wait 122779 00:16:46.340 Running I/O for 5 seconds... 00:16:46.340 task offset: 183272 on job bdev=EE_Dev_1 fails 00:16:46.340 00:16:46.340 Latency(us) 00:16:46.340 [2024-11-29T12:00:51.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.340 [2024-11-29T12:00:51.851Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:46.340 [2024-11-29T12:00:51.851Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:16:46.340 EE_Dev_1 : 0.00 23605.15 92.21 5364.81 0.00 453.96 165.70 826.65 00:16:46.340 [2024-11-29T12:00:51.851Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:46.340 Dev_2 : 0.00 17997.75 70.30 0.00 0.00 586.65 166.63 1064.96 00:16:46.340 [2024-11-29T12:00:51.851Z] =================================================================================================================== 00:16:46.340 [2024-11-29T12:00:51.851Z] Total : 41602.90 162.51 5364.81 0.00 525.92 165.70 1064.96 00:16:46.340 [2024-11-29 12:00:51.770656] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:46.340 request: 00:16:46.340 { 00:16:46.340 "method": "perform_tests", 00:16:46.340 "req_id": 1 00:16:46.340 } 00:16:46.340 Got JSON-RPC error response 00:16:46.340 response: 00:16:46.340 { 00:16:46.340 "code": -32603, 00:16:46.340 "message": "bdevperf failed with error Operation not permitted" 00:16:46.340 } 00:16:46.906 12:00:52 -- common/autotest_common.sh@653 -- # es=255 00:16:46.906 12:00:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.906 12:00:52 -- common/autotest_common.sh@662 -- # es=127 00:16:46.906 12:00:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:16:46.906 12:00:52 -- common/autotest_common.sh@670 -- # es=1 00:16:46.906 12:00:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.906 00:16:46.906 real 0m9.291s 00:16:46.906 user 0m9.606s 00:16:46.906 sys 0m0.839s 00:16:46.906 12:00:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:46.906 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:16:46.906 ************************************ 00:16:46.906 END TEST bdev_error 00:16:46.906 ************************************ 00:16:46.906 12:00:52 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:16:46.906 12:00:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:46.906 12:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.906 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:16:46.906 ************************************ 00:16:46.906 START TEST bdev_stat 00:16:46.906 ************************************ 00:16:46.906 12:00:52 -- common/autotest_common.sh@1114 -- # stat_test_suite '' 00:16:46.906 12:00:52 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:16:46.906 12:00:52 -- bdev/blockdev.sh@594 -- # STAT_PID=122831 00:16:46.906 12:00:52 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 122831' 00:16:46.906 Process Bdev IO statistics testing pid: 122831 00:16:46.906 12:00:52 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:16:46.906 12:00:52 -- bdev/blockdev.sh@597 -- # waitforlisten 122831 00:16:46.906 12:00:52 -- 
common/autotest_common.sh@829 -- # '[' -z 122831 ']' 00:16:46.906 12:00:52 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:16:46.906 12:00:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.906 12:00:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.906 12:00:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.906 12:00:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.906 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:16:46.906 [2024-11-29 12:00:52.255176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:46.906 [2024-11-29 12:00:52.255628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122831 ] 00:16:46.906 [2024-11-29 12:00:52.404876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:47.166 [2024-11-29 12:00:52.490205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.166 [2024-11-29 12:00:52.490204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.103 12:00:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.103 12:00:53 -- common/autotest_common.sh@862 -- # return 0 00:16:48.103 12:00:53 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:16:48.103 12:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.103 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:48.103 Malloc_STAT 00:16:48.103 12:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.103 12:00:53 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:16:48.103 12:00:53 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:16:48.103 12:00:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.103 12:00:53 -- common/autotest_common.sh@899 -- # local i 00:16:48.103 12:00:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.103 12:00:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.103 12:00:53 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:48.103 12:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.103 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:48.103 12:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.103 12:00:53 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:16:48.103 12:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.103 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:48.103 [ 00:16:48.103 { 00:16:48.103 "name": "Malloc_STAT", 00:16:48.103 "aliases": [ 00:16:48.103 "88d87ccf-2e1b-46d0-a3f6-430ea625270c" 00:16:48.103 ], 00:16:48.103 "product_name": "Malloc disk", 00:16:48.103 "block_size": 512, 00:16:48.103 "num_blocks": 262144, 00:16:48.103 "uuid": "88d87ccf-2e1b-46d0-a3f6-430ea625270c", 00:16:48.103 "assigned_rate_limits": { 00:16:48.103 "rw_ios_per_sec": 0, 00:16:48.103 "rw_mbytes_per_sec": 0, 00:16:48.103 "r_mbytes_per_sec": 0, 00:16:48.103 "w_mbytes_per_sec": 0 00:16:48.103 }, 00:16:48.103 "claimed": 
false, 00:16:48.103 "zoned": false, 00:16:48.103 "supported_io_types": { 00:16:48.103 "read": true, 00:16:48.103 "write": true, 00:16:48.103 "unmap": true, 00:16:48.103 "write_zeroes": true, 00:16:48.103 "flush": true, 00:16:48.103 "reset": true, 00:16:48.103 "compare": false, 00:16:48.103 "compare_and_write": false, 00:16:48.103 "abort": true, 00:16:48.103 "nvme_admin": false, 00:16:48.103 "nvme_io": false 00:16:48.103 }, 00:16:48.103 "memory_domains": [ 00:16:48.103 { 00:16:48.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.103 "dma_device_type": 2 00:16:48.103 } 00:16:48.103 ], 00:16:48.103 "driver_specific": {} 00:16:48.103 } 00:16:48.103 ] 00:16:48.103 12:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.103 12:00:53 -- common/autotest_common.sh@905 -- # return 0 00:16:48.103 12:00:53 -- bdev/blockdev.sh@603 -- # sleep 2 00:16:48.103 12:00:53 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:48.103 Running I/O for 10 seconds... 00:16:50.017 12:00:55 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:16:50.017 12:00:55 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:16:50.017 12:00:55 -- bdev/blockdev.sh@558 -- # local iostats 00:16:50.017 12:00:55 -- bdev/blockdev.sh@559 -- # local io_count1 00:16:50.017 12:00:55 -- bdev/blockdev.sh@560 -- # local io_count2 00:16:50.017 12:00:55 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:16:50.017 12:00:55 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:16:50.017 12:00:55 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:16:50.017 12:00:55 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:16:50.017 12:00:55 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:50.017 12:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.017 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 12:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.017 12:00:55 -- bdev/blockdev.sh@566 -- # iostats='{ 00:16:50.017 "tick_rate": 2200000000, 00:16:50.017 "ticks": 1618629536686, 00:16:50.017 "bdevs": [ 00:16:50.017 { 00:16:50.017 "name": "Malloc_STAT", 00:16:50.017 "bytes_read": 851481088, 00:16:50.017 "num_read_ops": 207875, 00:16:50.017 "bytes_written": 0, 00:16:50.017 "num_write_ops": 0, 00:16:50.017 "bytes_unmapped": 0, 00:16:50.017 "num_unmap_ops": 0, 00:16:50.017 "bytes_copied": 0, 00:16:50.017 "num_copy_ops": 0, 00:16:50.017 "read_latency_ticks": 2150375700418, 00:16:50.017 "max_read_latency_ticks": 13056324, 00:16:50.017 "min_read_latency_ticks": 491596, 00:16:50.017 "write_latency_ticks": 0, 00:16:50.017 "max_write_latency_ticks": 0, 00:16:50.017 "min_write_latency_ticks": 0, 00:16:50.017 "unmap_latency_ticks": 0, 00:16:50.017 "max_unmap_latency_ticks": 0, 00:16:50.017 "min_unmap_latency_ticks": 0, 00:16:50.017 "copy_latency_ticks": 0, 00:16:50.017 "max_copy_latency_ticks": 0, 00:16:50.017 "min_copy_latency_ticks": 0, 00:16:50.017 "io_error": {} 00:16:50.017 } 00:16:50.017 ] 00:16:50.017 }' 00:16:50.017 12:00:55 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:16:50.017 12:00:55 -- bdev/blockdev.sh@567 -- # io_count1=207875 00:16:50.017 12:00:55 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:16:50.017 12:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.017 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 12:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
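The per-channel consistency check stat_function_test performs here can be reproduced with plain rpc.py and jq: the sum of the per-channel read counts must land between two aggregate snapshots taken before and after the per-channel query (207875 <= 215808 <= 228611 in this run, since I/O keeps running between reads). A sketch under the same assumptions as the earlier ones:

RPC=${RPC:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}   # assumed rpc.py location

count1=$("$RPC" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# -c returns one entry per channel (thread_id 2 and 3 below); add up their read counts.
channel_sum=$("$RPC" bdev_get_iostat -b Malloc_STAT -c \
    | jq -r '[.channels[].num_read_ops] | add')

count2=$("$RPC" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# bdevperf I/O is still running, so the channel sum must sit between the snapshots.
if [ "$channel_sum" -lt "$count1" ] || [ "$channel_sum" -gt "$count2" ]; then
    echo "per-channel sum $channel_sum outside [$count1, $count2]" >&2
    exit 1
fi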
00:16:50.017 12:00:55 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:16:50.017 "tick_rate": 2200000000, 00:16:50.017 "ticks": 1618784025609, 00:16:50.017 "name": "Malloc_STAT", 00:16:50.017 "channels": [ 00:16:50.017 { 00:16:50.017 "thread_id": 2, 00:16:50.017 "bytes_read": 439353344, 00:16:50.017 "num_read_ops": 107264, 00:16:50.017 "bytes_written": 0, 00:16:50.017 "num_write_ops": 0, 00:16:50.017 "bytes_unmapped": 0, 00:16:50.017 "num_unmap_ops": 0, 00:16:50.017 "bytes_copied": 0, 00:16:50.017 "num_copy_ops": 0, 00:16:50.017 "read_latency_ticks": 1113628454799, 00:16:50.017 "max_read_latency_ticks": 13035404, 00:16:50.017 "min_read_latency_ticks": 8692216, 00:16:50.017 "write_latency_ticks": 0, 00:16:50.017 "max_write_latency_ticks": 0, 00:16:50.017 "min_write_latency_ticks": 0, 00:16:50.017 "unmap_latency_ticks": 0, 00:16:50.017 "max_unmap_latency_ticks": 0, 00:16:50.017 "min_unmap_latency_ticks": 0, 00:16:50.017 "copy_latency_ticks": 0, 00:16:50.017 "max_copy_latency_ticks": 0, 00:16:50.017 "min_copy_latency_ticks": 0 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "thread_id": 3, 00:16:50.017 "bytes_read": 444596224, 00:16:50.017 "num_read_ops": 108544, 00:16:50.017 "bytes_written": 0, 00:16:50.017 "num_write_ops": 0, 00:16:50.017 "bytes_unmapped": 0, 00:16:50.017 "num_unmap_ops": 0, 00:16:50.017 "bytes_copied": 0, 00:16:50.017 "num_copy_ops": 0, 00:16:50.017 "read_latency_ticks": 1116307430013, 00:16:50.017 "max_read_latency_ticks": 13056324, 00:16:50.017 "min_read_latency_ticks": 8405719, 00:16:50.017 "write_latency_ticks": 0, 00:16:50.017 "max_write_latency_ticks": 0, 00:16:50.017 "min_write_latency_ticks": 0, 00:16:50.017 "unmap_latency_ticks": 0, 00:16:50.017 "max_unmap_latency_ticks": 0, 00:16:50.017 "min_unmap_latency_ticks": 0, 00:16:50.017 "copy_latency_ticks": 0, 00:16:50.017 "max_copy_latency_ticks": 0, 00:16:50.017 "min_copy_latency_ticks": 0 00:16:50.017 } 00:16:50.017 ] 00:16:50.017 }' 00:16:50.017 12:00:55 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:16:50.017 12:00:55 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=107264 00:16:50.017 12:00:55 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=107264 00:16:50.017 12:00:55 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:16:50.274 12:00:55 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=108544 00:16:50.274 12:00:55 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=215808 00:16:50.274 12:00:55 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:50.274 12:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.274 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.274 12:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.274 12:00:55 -- bdev/blockdev.sh@575 -- # iostats='{ 00:16:50.274 "tick_rate": 2200000000, 00:16:50.274 "ticks": 1619038830994, 00:16:50.274 "bdevs": [ 00:16:50.274 { 00:16:50.274 "name": "Malloc_STAT", 00:16:50.274 "bytes_read": 936415744, 00:16:50.274 "num_read_ops": 228611, 00:16:50.274 "bytes_written": 0, 00:16:50.274 "num_write_ops": 0, 00:16:50.274 "bytes_unmapped": 0, 00:16:50.274 "num_unmap_ops": 0, 00:16:50.274 "bytes_copied": 0, 00:16:50.274 "num_copy_ops": 0, 00:16:50.274 "read_latency_ticks": 2358732268170, 00:16:50.274 "max_read_latency_ticks": 13822861, 00:16:50.274 "min_read_latency_ticks": 491596, 00:16:50.274 "write_latency_ticks": 0, 00:16:50.274 "max_write_latency_ticks": 0, 00:16:50.274 "min_write_latency_ticks": 0, 00:16:50.274 "unmap_latency_ticks": 0, 00:16:50.274 
"max_unmap_latency_ticks": 0, 00:16:50.274 "min_unmap_latency_ticks": 0, 00:16:50.274 "copy_latency_ticks": 0, 00:16:50.274 "max_copy_latency_ticks": 0, 00:16:50.274 "min_copy_latency_ticks": 0, 00:16:50.274 "io_error": {} 00:16:50.274 } 00:16:50.274 ] 00:16:50.274 }' 00:16:50.274 12:00:55 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:16:50.274 12:00:55 -- bdev/blockdev.sh@576 -- # io_count2=228611 00:16:50.274 12:00:55 -- bdev/blockdev.sh@581 -- # '[' 215808 -lt 207875 ']' 00:16:50.274 12:00:55 -- bdev/blockdev.sh@581 -- # '[' 215808 -gt 228611 ']' 00:16:50.274 12:00:55 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:16:50.274 12:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.274 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.274 00:16:50.274 Latency(us) 00:16:50.274 [2024-11-29T12:00:55.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.274 [2024-11-29T12:00:55.785Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:50.274 Malloc_STAT : 2.17 54247.98 211.91 0.00 0.00 4707.79 1042.62 6285.50 00:16:50.274 [2024-11-29T12:00:55.785Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:50.274 Malloc_STAT : 2.17 54915.90 214.52 0.00 0.00 4650.60 815.48 5957.82 00:16:50.274 [2024-11-29T12:00:55.785Z] =================================================================================================================== 00:16:50.274 [2024-11-29T12:00:55.785Z] Total : 109163.88 426.42 0.00 0.00 4679.01 815.48 6285.50 00:16:50.274 0 00:16:50.274 12:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.274 12:00:55 -- bdev/blockdev.sh@607 -- # killprocess 122831 00:16:50.274 12:00:55 -- common/autotest_common.sh@936 -- # '[' -z 122831 ']' 00:16:50.274 12:00:55 -- common/autotest_common.sh@940 -- # kill -0 122831 00:16:50.274 12:00:55 -- common/autotest_common.sh@941 -- # uname 00:16:50.274 12:00:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.274 12:00:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122831 00:16:50.274 12:00:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:50.274 12:00:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:50.274 killing process with pid 122831 00:16:50.274 12:00:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122831' 00:16:50.274 Received shutdown signal, test time was about 2.226496 seconds 00:16:50.274 00:16:50.274 Latency(us) 00:16:50.274 [2024-11-29T12:00:55.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.274 [2024-11-29T12:00:55.785Z] =================================================================================================================== 00:16:50.274 [2024-11-29T12:00:55.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.274 12:00:55 -- common/autotest_common.sh@955 -- # kill 122831 00:16:50.274 12:00:55 -- common/autotest_common.sh@960 -- # wait 122831 00:16:50.531 12:00:55 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:16:50.531 00:16:50.531 real 0m3.745s 00:16:50.531 user 0m7.468s 00:16:50.531 sys 0m0.398s 00:16:50.531 12:00:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:50.531 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.531 ************************************ 00:16:50.531 END TEST bdev_stat 00:16:50.531 ************************************ 00:16:50.531 12:00:55 -- 
bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:16:50.531 12:00:55 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:16:50.531 12:00:55 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:16:50.531 12:00:55 -- bdev/blockdev.sh@809 -- # cleanup 00:16:50.531 12:00:55 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:50.531 12:00:55 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:50.531 12:00:55 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:16:50.531 12:00:55 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:16:50.531 12:00:55 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:16:50.531 12:00:55 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:16:50.531 00:16:50.531 real 1m57.287s 00:16:50.531 user 5m14.472s 00:16:50.531 sys 0m20.847s 00:16:50.531 12:00:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:50.531 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.531 ************************************ 00:16:50.531 END TEST blockdev_general 00:16:50.531 ************************************ 00:16:50.531 12:00:56 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:50.531 12:00:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:50.531 12:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.531 12:00:56 -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 ************************************ 00:16:50.788 START TEST bdev_raid 00:16:50.788 ************************************ 00:16:50.788 12:00:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:50.788 * Looking for test storage... 00:16:50.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:50.788 12:00:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:50.788 12:00:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:50.788 12:00:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:50.788 12:00:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:50.788 12:00:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:50.788 12:00:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:50.788 12:00:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:50.788 12:00:56 -- scripts/common.sh@335 -- # IFS=.-: 00:16:50.788 12:00:56 -- scripts/common.sh@335 -- # read -ra ver1 00:16:50.788 12:00:56 -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.788 12:00:56 -- scripts/common.sh@336 -- # read -ra ver2 00:16:50.788 12:00:56 -- scripts/common.sh@337 -- # local 'op=<' 00:16:50.788 12:00:56 -- scripts/common.sh@339 -- # ver1_l=2 00:16:50.788 12:00:56 -- scripts/common.sh@340 -- # ver2_l=1 00:16:50.788 12:00:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:50.788 12:00:56 -- scripts/common.sh@343 -- # case "$op" in 00:16:50.789 12:00:56 -- scripts/common.sh@344 -- # : 1 00:16:50.789 12:00:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:50.789 12:00:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.789 12:00:56 -- scripts/common.sh@364 -- # decimal 1 00:16:50.789 12:00:56 -- scripts/common.sh@352 -- # local d=1 00:16:50.789 12:00:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.789 12:00:56 -- scripts/common.sh@354 -- # echo 1 00:16:50.789 12:00:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:50.789 12:00:56 -- scripts/common.sh@365 -- # decimal 2 00:16:50.789 12:00:56 -- scripts/common.sh@352 -- # local d=2 00:16:50.789 12:00:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.789 12:00:56 -- scripts/common.sh@354 -- # echo 2 00:16:50.789 12:00:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:50.789 12:00:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:50.789 12:00:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:50.789 12:00:56 -- scripts/common.sh@367 -- # return 0 00:16:50.789 12:00:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.789 12:00:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.789 --rc genhtml_branch_coverage=1 00:16:50.789 --rc genhtml_function_coverage=1 00:16:50.789 --rc genhtml_legend=1 00:16:50.789 --rc geninfo_all_blocks=1 00:16:50.789 --rc geninfo_unexecuted_blocks=1 00:16:50.789 00:16:50.789 ' 00:16:50.789 12:00:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.789 --rc genhtml_branch_coverage=1 00:16:50.789 --rc genhtml_function_coverage=1 00:16:50.789 --rc genhtml_legend=1 00:16:50.789 --rc geninfo_all_blocks=1 00:16:50.789 --rc geninfo_unexecuted_blocks=1 00:16:50.789 00:16:50.789 ' 00:16:50.789 12:00:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.789 --rc genhtml_branch_coverage=1 00:16:50.789 --rc genhtml_function_coverage=1 00:16:50.789 --rc genhtml_legend=1 00:16:50.789 --rc geninfo_all_blocks=1 00:16:50.789 --rc geninfo_unexecuted_blocks=1 00:16:50.789 00:16:50.789 ' 00:16:50.789 12:00:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.789 --rc genhtml_branch_coverage=1 00:16:50.789 --rc genhtml_function_coverage=1 00:16:50.789 --rc genhtml_legend=1 00:16:50.789 --rc geninfo_all_blocks=1 00:16:50.789 --rc geninfo_unexecuted_blocks=1 00:16:50.789 00:16:50.789 ' 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:50.789 12:00:56 -- bdev/nbd_common.sh@6 -- # set -e 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@716 -- # uname -s 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:16:50.789 12:00:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.789 12:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.789 12:00:56 -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.789 ************************************ 00:16:50.789 START TEST raid_function_test_raid0 00:16:50.789 ************************************ 00:16:50.789 12:00:56 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@86 -- # raid_pid=122977 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122977' 00:16:50.789 Process raid pid: 122977 00:16:50.789 12:00:56 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122977 /var/tmp/spdk-raid.sock 00:16:50.789 12:00:56 -- common/autotest_common.sh@829 -- # '[' -z 122977 ']' 00:16:50.789 12:00:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.789 12:00:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.789 12:00:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:50.789 12:00:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.789 12:00:56 -- common/autotest_common.sh@10 -- # set +x 00:16:50.789 [2024-11-29 12:00:56.298309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:50.789 [2024-11-29 12:00:56.298538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.046 [2024-11-29 12:00:56.441255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.046 [2024-11-29 12:00:56.542739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.304 [2024-11-29 12:00:56.600967] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.868 12:00:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.868 12:00:57 -- common/autotest_common.sh@862 -- # return 0 00:16:51.868 12:00:57 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:16:51.868 12:00:57 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:16:51.868 12:00:57 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:51.868 12:00:57 -- bdev/bdev_raid.sh@70 -- # cat 00:16:51.868 12:00:57 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:52.126 [2024-11-29 12:00:57.593711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:52.126 [2024-11-29 12:00:57.596723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:52.126 [2024-11-29 12:00:57.596829] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:52.126 [2024-11-29 12:00:57.596843] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:52.126 [2024-11-29 12:00:57.597056] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:16:52.126 [2024-11-29 12:00:57.597534] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:52.126 [2024-11-29 12:00:57.597560] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:16:52.126 [2024-11-29 12:00:57.597833] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.126 Base_1 00:16:52.126 Base_2 00:16:52.126 12:00:57 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:52.126 12:00:57 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:52.126 12:00:57 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:16:52.383 12:00:57 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:16:52.383 12:00:57 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:16:52.383 12:00:57 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@12 -- # local i 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:52.383 12:00:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:52.640 [2024-11-29 12:00:58.125989] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:52.640 /dev/nbd0 00:16:52.898 12:00:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:52.898 12:00:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:52.898 12:00:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:52.898 12:00:58 -- common/autotest_common.sh@867 -- # local i 00:16:52.898 12:00:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:52.898 12:00:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:52.898 12:00:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:52.898 12:00:58 -- common/autotest_common.sh@871 -- # break 00:16:52.898 12:00:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:52.898 12:00:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:52.898 12:00:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.898 1+0 records in 00:16:52.898 1+0 records out 00:16:52.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313132 s, 13.1 MB/s 00:16:52.898 12:00:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.898 12:00:58 -- common/autotest_common.sh@884 -- # size=4096 00:16:52.898 12:00:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.898 12:00:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:52.898 12:00:58 -- common/autotest_common.sh@887 -- # return 0 00:16:52.898 12:00:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.898 12:00:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:52.898 12:00:58 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:52.898 12:00:58 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:52.898 12:00:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:53.156 { 00:16:53.156 "nbd_device": "/dev/nbd0", 00:16:53.156 "bdev_name": "raid" 00:16:53.156 } 00:16:53.156 ]' 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:53.156 { 00:16:53.156 "nbd_device": "/dev/nbd0", 00:16:53.156 "bdev_name": "raid" 00:16:53.156 } 00:16:53.156 ]' 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@65 -- # count=1 00:16:53.156 12:00:58 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@98 -- # count=1 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@20 -- # local blksize 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:16:53.156 4096+0 records in 00:16:53.156 4096+0 records out 00:16:53.156 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.024411 s, 85.9 MB/s 00:16:53.156 12:00:58 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:53.414 4096+0 records in 00:16:53.414 4096+0 records out 00:16:53.414 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.287395 s, 7.3 MB/s 00:16:53.414 12:00:58 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:16:53.414 12:00:58 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:53.414 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:16:53.414 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:53.415 128+0 records in 00:16:53.415 128+0 records out 00:16:53.415 65536 bytes (66 kB, 64 KiB) copied, 0.000367632 s, 
178 MB/s 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:53.415 2035+0 records in 00:16:53.415 2035+0 records out 00:16:53.415 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00650441 s, 160 MB/s 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:53.415 456+0 records in 00:16:53.415 456+0 records out 00:16:53.415 233472 bytes (233 kB, 228 KiB) copied, 0.00112002 s, 208 MB/s 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:53.415 12:00:58 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:53.673 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:53.673 12:00:58 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:53.673 12:00:58 -- bdev/bdev_raid.sh@53 -- # return 0 00:16:53.673 12:00:58 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:53.673 12:00:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:53.673 12:00:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:53.673 12:00:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:53.673 12:00:58 -- bdev/nbd_common.sh@51 -- # local i 00:16:53.673 12:00:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.673 12:00:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:53.931 [2024-11-29 12:00:59.197241] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@41 -- # break 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.931 12:00:59 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:53.931 12:00:59 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@65 -- # true 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@65 -- # count=0 00:16:54.190 12:00:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:54.190 12:00:59 -- bdev/bdev_raid.sh@106 -- # count=0 00:16:54.190 12:00:59 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:16:54.190 12:00:59 -- bdev/bdev_raid.sh@111 -- # killprocess 122977 00:16:54.190 12:00:59 -- common/autotest_common.sh@936 -- # '[' -z 122977 ']' 00:16:54.190 12:00:59 -- common/autotest_common.sh@940 -- # kill -0 122977 00:16:54.190 12:00:59 -- common/autotest_common.sh@941 -- # uname 00:16:54.190 12:00:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.190 12:00:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122977 00:16:54.190 12:00:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.190 12:00:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.190 12:00:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122977' 00:16:54.190 killing process with pid 122977 00:16:54.190 12:00:59 -- common/autotest_common.sh@955 -- # kill 122977 00:16:54.190 [2024-11-29 12:00:59.561238] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.190 [2024-11-29 12:00:59.561376] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.190 12:00:59 -- common/autotest_common.sh@960 -- # wait 122977 00:16:54.190 [2024-11-29 12:00:59.561460] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.190 [2024-11-29 12:00:59.561475] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:16:54.190 [2024-11-29 12:00:59.585198] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@113 -- # return 0 00:16:54.450 00:16:54.450 real 0m3.582s 00:16:54.450 user 0m4.985s 00:16:54.450 sys 0m0.919s 00:16:54.450 12:00:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:54.450 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:16:54.450 ************************************ 00:16:54.450 END TEST raid_function_test_raid0 00:16:54.450 ************************************ 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:16:54.450 12:00:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.450 12:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.450 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:16:54.450 ************************************ 00:16:54.450 START TEST raid_function_test_concat 00:16:54.450 ************************************ 00:16:54.450 12:00:59 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@83 -- # 
local raid_bdev 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@86 -- # raid_pid=123130 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 123130' 00:16:54.450 Process raid pid: 123130 00:16:54.450 12:00:59 -- bdev/bdev_raid.sh@88 -- # waitforlisten 123130 /var/tmp/spdk-raid.sock 00:16:54.450 12:00:59 -- common/autotest_common.sh@829 -- # '[' -z 123130 ']' 00:16:54.450 12:00:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.450 12:00:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.450 12:00:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.450 12:00:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.450 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:16:54.450 [2024-11-29 12:00:59.944852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:54.450 [2024-11-29 12:00:59.945058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.709 [2024-11-29 12:01:00.086793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.709 [2024-11-29 12:01:00.183644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.967 [2024-11-29 12:01:00.240569] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.534 12:01:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.534 12:01:00 -- common/autotest_common.sh@862 -- # return 0 00:16:55.534 12:01:00 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:16:55.534 12:01:00 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:16:55.534 12:01:00 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:55.534 12:01:00 -- bdev/bdev_raid.sh@70 -- # cat 00:16:55.534 12:01:00 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:55.792 [2024-11-29 12:01:01.269985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:55.792 [2024-11-29 12:01:01.272385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:55.792 [2024-11-29 12:01:01.272523] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:55.792 [2024-11-29 12:01:01.272539] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:55.792 [2024-11-29 12:01:01.272736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:16:55.792 [2024-11-29 12:01:01.273195] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:55.792 [2024-11-29 12:01:01.273222] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:16:55.792 [2024-11-29 12:01:01.273417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.792 Base_1 00:16:55.792 Base_2 00:16:55.792 12:01:01 -- bdev/bdev_raid.sh@77 -- # rm -rf 
/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:55.792 12:01:01 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:55.792 12:01:01 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.359 12:01:01 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:16:56.359 12:01:01 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:16:56.359 12:01:01 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@12 -- # local i 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:56.359 [2024-11-29 12:01:01.798127] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:56.359 /dev/nbd0 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:56.359 12:01:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:56.359 12:01:01 -- common/autotest_common.sh@867 -- # local i 00:16:56.359 12:01:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:56.359 12:01:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:56.359 12:01:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:56.359 12:01:01 -- common/autotest_common.sh@871 -- # break 00:16:56.359 12:01:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:56.359 12:01:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:56.359 12:01:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.359 1+0 records in 00:16:56.359 1+0 records out 00:16:56.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759743 s, 5.4 MB/s 00:16:56.359 12:01:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.359 12:01:01 -- common/autotest_common.sh@884 -- # size=4096 00:16:56.359 12:01:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.359 12:01:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:56.359 12:01:01 -- common/autotest_common.sh@887 -- # return 0 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.359 12:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.359 12:01:01 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:56.360 12:01:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:56.360 12:01:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:56.618 12:01:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:56.618 { 00:16:56.618 "nbd_device": "/dev/nbd0", 00:16:56.618 "bdev_name": "raid" 00:16:56.618 } 00:16:56.618 ]' 00:16:56.618 12:01:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] 
| .nbd_device' 00:16:56.618 12:01:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:56.618 { 00:16:56.618 "nbd_device": "/dev/nbd0", 00:16:56.618 "bdev_name": "raid" 00:16:56.618 } 00:16:56.618 ]' 00:16:56.876 12:01:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:56.876 12:01:02 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:56.876 12:01:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:56.876 12:01:02 -- bdev/nbd_common.sh@65 -- # count=1 00:16:56.876 12:01:02 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@98 -- # count=1 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@20 -- # local blksize 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:16:56.876 4096+0 records in 00:16:56.876 4096+0 records out 00:16:56.876 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0247378 s, 84.8 MB/s 00:16:56.876 12:01:02 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:57.135 4096+0 records in 00:16:57.135 4096+0 records out 00:16:57.135 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.252655 s, 8.3 MB/s 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:57.135 128+0 records in 00:16:57.135 128+0 records out 00:16:57.135 65536 bytes (66 kB, 64 KiB) copied, 0.000732708 s, 89.4 MB/s 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=526336 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:57.135 2035+0 records in 00:16:57.135 2035+0 records out 00:16:57.135 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00510426 s, 204 MB/s 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:57.135 456+0 records in 00:16:57.135 456+0 records out 00:16:57.135 233472 bytes (233 kB, 228 KiB) copied, 0.00171308 s, 136 MB/s 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:57.135 12:01:02 -- bdev/bdev_raid.sh@53 -- # return 0 00:16:57.136 12:01:02 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:57.136 12:01:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:57.136 12:01:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.136 12:01:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.136 12:01:02 -- bdev/nbd_common.sh@51 -- # local i 00:16:57.136 12:01:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.136 12:01:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:57.394 [2024-11-29 12:01:02.836350] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@41 -- # break 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.394 12:01:02 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:57.394 12:01:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:57.654 12:01:03 
-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@65 -- # true 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@65 -- # count=0 00:16:57.654 12:01:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:57.654 12:01:03 -- bdev/bdev_raid.sh@106 -- # count=0 00:16:57.654 12:01:03 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:16:57.654 12:01:03 -- bdev/bdev_raid.sh@111 -- # killprocess 123130 00:16:57.654 12:01:03 -- common/autotest_common.sh@936 -- # '[' -z 123130 ']' 00:16:57.654 12:01:03 -- common/autotest_common.sh@940 -- # kill -0 123130 00:16:57.654 12:01:03 -- common/autotest_common.sh@941 -- # uname 00:16:57.654 12:01:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.654 12:01:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123130 00:16:57.912 12:01:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:57.912 12:01:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:57.912 12:01:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123130' 00:16:57.912 killing process with pid 123130 00:16:57.912 12:01:03 -- common/autotest_common.sh@955 -- # kill 123130 00:16:57.912 [2024-11-29 12:01:03.171678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.912 12:01:03 -- common/autotest_common.sh@960 -- # wait 123130 00:16:57.912 [2024-11-29 12:01:03.171816] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.912 [2024-11-29 12:01:03.171893] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.912 [2024-11-29 12:01:03.171907] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:16:57.912 [2024-11-29 12:01:03.194819] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@113 -- # return 0 00:16:58.229 00:16:58.229 real 0m3.552s 00:16:58.229 user 0m4.981s 00:16:58.229 sys 0m0.931s 00:16:58.229 12:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:58.229 12:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.229 ************************************ 00:16:58.229 END TEST raid_function_test_concat 00:16:58.229 ************************************ 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:16:58.229 12:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:58.229 12:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.229 12:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.229 ************************************ 00:16:58.229 START TEST raid0_resize_test 00:16:58.229 ************************************ 00:16:58.229 12:01:03 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@301 -- # raid_pid=123279 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:58.229 Process 
raid pid: 123279 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 123279' 00:16:58.229 12:01:03 -- bdev/bdev_raid.sh@303 -- # waitforlisten 123279 /var/tmp/spdk-raid.sock 00:16:58.229 12:01:03 -- common/autotest_common.sh@829 -- # '[' -z 123279 ']' 00:16:58.229 12:01:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:58.229 12:01:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:58.229 12:01:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:58.229 12:01:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.229 12:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.229 [2024-11-29 12:01:03.559099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:58.229 [2024-11-29 12:01:03.559348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.229 [2024-11-29 12:01:03.705313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.488 [2024-11-29 12:01:03.798032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.488 [2024-11-29 12:01:03.854675] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.055 12:01:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.055 12:01:04 -- common/autotest_common.sh@862 -- # return 0 00:16:59.055 12:01:04 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:16:59.314 Base_1 00:16:59.314 12:01:04 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:16:59.573 Base_2 00:16:59.573 12:01:05 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:16:59.830 [2024-11-29 12:01:05.249263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:59.830 [2024-11-29 12:01:05.251682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:59.830 [2024-11-29 12:01:05.251759] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:59.830 [2024-11-29 12:01:05.251782] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:59.830 [2024-11-29 12:01:05.252018] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:16:59.830 [2024-11-29 12:01:05.252522] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:59.830 [2024-11-29 12:01:05.252545] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:16:59.830 [2024-11-29 12:01:05.252777] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.830 12:01:05 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:17:00.089 [2024-11-29 12:01:05.485257] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:17:00.089 
[2024-11-29 12:01:05.485303] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:17:00.089 true 00:17:00.089 12:01:05 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:17:00.089 12:01:05 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:17:00.347 [2024-11-29 12:01:05.729475] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.347 12:01:05 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:17:00.347 12:01:05 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:17:00.347 12:01:05 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:17:00.347 12:01:05 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:17:00.606 [2024-11-29 12:01:05.973343] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:17:00.606 [2024-11-29 12:01:05.973387] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:17:00.606 [2024-11-29 12:01:05.973445] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:17:00.606 [2024-11-29 12:01:05.973518] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:00.606 true 00:17:00.606 12:01:05 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:17:00.606 12:01:05 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:17:00.864 [2024-11-29 12:01:06.249595] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.865 12:01:06 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:17:00.865 12:01:06 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:17:00.865 12:01:06 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:17:00.865 12:01:06 -- bdev/bdev_raid.sh@332 -- # killprocess 123279 00:17:00.865 12:01:06 -- common/autotest_common.sh@936 -- # '[' -z 123279 ']' 00:17:00.865 12:01:06 -- common/autotest_common.sh@940 -- # kill -0 123279 00:17:00.865 12:01:06 -- common/autotest_common.sh@941 -- # uname 00:17:00.865 12:01:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.865 12:01:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123279 00:17:00.865 12:01:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:00.865 killing process with pid 123279 00:17:00.865 12:01:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:00.865 12:01:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123279' 00:17:00.865 12:01:06 -- common/autotest_common.sh@955 -- # kill 123279 00:17:00.865 [2024-11-29 12:01:06.293678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.865 12:01:06 -- common/autotest_common.sh@960 -- # wait 123279 00:17:00.865 [2024-11-29 12:01:06.293821] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.865 [2024-11-29 12:01:06.293898] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.865 [2024-11-29 12:01:06.293912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:17:00.865 [2024-11-29 12:01:06.294494] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.123 12:01:06 -- 
bdev/bdev_raid.sh@334 -- # return 0 00:17:01.123 00:17:01.123 real 0m3.035s 00:17:01.123 user 0m4.718s 00:17:01.123 sys 0m0.526s 00:17:01.123 12:01:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:01.123 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:01.123 ************************************ 00:17:01.123 END TEST raid0_resize_test 00:17:01.123 ************************************ 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:17:01.123 12:01:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:01.123 12:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.123 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:01.123 ************************************ 00:17:01.123 START TEST raid_state_function_test 00:17:01.123 ************************************ 00:17:01.123 12:01:06 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=123355 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:01.123 Process raid pid: 123355 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123355' 00:17:01.123 12:01:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123355 /var/tmp/spdk-raid.sock 00:17:01.123 12:01:06 -- common/autotest_common.sh@829 -- # '[' -z 123355 ']' 00:17:01.123 12:01:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:01.123 12:01:06 -- common/autotest_common.sh@834 -- # local max_retries=100 
00:17:01.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:01.123 12:01:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:01.123 12:01:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.123 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:01.382 [2024-11-29 12:01:06.645884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:01.382 [2024-11-29 12:01:06.646704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.382 [2024-11-29 12:01:06.789022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.382 [2024-11-29 12:01:06.885922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.641 [2024-11-29 12:01:06.941953] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.207 12:01:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.207 12:01:07 -- common/autotest_common.sh@862 -- # return 0 00:17:02.207 12:01:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:02.466 [2024-11-29 12:01:07.884528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.466 [2024-11-29 12:01:07.884645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.466 [2024-11-29 12:01:07.884670] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.466 [2024-11-29 12:01:07.884690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.466 12:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.034 12:01:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.034 "name": "Existed_Raid", 00:17:03.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.034 "strip_size_kb": 64, 00:17:03.034 "state": "configuring", 00:17:03.034 "raid_level": "raid0", 00:17:03.034 "superblock": false, 00:17:03.034 "num_base_bdevs": 2, 00:17:03.034 "num_base_bdevs_discovered": 0, 00:17:03.034 "num_base_bdevs_operational": 2, 00:17:03.034 "base_bdevs_list": [ 00:17:03.034 { 00:17:03.034 "name": "BaseBdev1", 
00:17:03.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.034 "is_configured": false, 00:17:03.034 "data_offset": 0, 00:17:03.034 "data_size": 0 00:17:03.034 }, 00:17:03.034 { 00:17:03.034 "name": "BaseBdev2", 00:17:03.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.034 "is_configured": false, 00:17:03.034 "data_offset": 0, 00:17:03.034 "data_size": 0 00:17:03.034 } 00:17:03.034 ] 00:17:03.034 }' 00:17:03.034 12:01:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.034 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:17:03.601 12:01:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:03.602 [2024-11-29 12:01:09.100638] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.602 [2024-11-29 12:01:09.100719] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:03.860 12:01:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:03.860 [2024-11-29 12:01:09.336749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.860 [2024-11-29 12:01:09.336858] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.860 [2024-11-29 12:01:09.336881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.860 [2024-11-29 12:01:09.336908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.860 12:01:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:04.118 [2024-11-29 12:01:09.581055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.118 BaseBdev1 00:17:04.118 12:01:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:04.118 12:01:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:04.118 12:01:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.118 12:01:09 -- common/autotest_common.sh@899 -- # local i 00:17:04.118 12:01:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:04.118 12:01:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.118 12:01:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.377 12:01:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:04.635 [ 00:17:04.635 { 00:17:04.635 "name": "BaseBdev1", 00:17:04.635 "aliases": [ 00:17:04.635 "34d098a4-5494-4712-86a4-fba8fd4ee949" 00:17:04.635 ], 00:17:04.635 "product_name": "Malloc disk", 00:17:04.635 "block_size": 512, 00:17:04.635 "num_blocks": 65536, 00:17:04.635 "uuid": "34d098a4-5494-4712-86a4-fba8fd4ee949", 00:17:04.635 "assigned_rate_limits": { 00:17:04.635 "rw_ios_per_sec": 0, 00:17:04.635 "rw_mbytes_per_sec": 0, 00:17:04.635 "r_mbytes_per_sec": 0, 00:17:04.635 "w_mbytes_per_sec": 0 00:17:04.635 }, 00:17:04.635 "claimed": true, 00:17:04.635 "claim_type": "exclusive_write", 00:17:04.635 "zoned": false, 00:17:04.635 "supported_io_types": { 00:17:04.635 "read": true, 00:17:04.635 "write": true, 00:17:04.635 "unmap": true, 
00:17:04.635 "write_zeroes": true, 00:17:04.635 "flush": true, 00:17:04.635 "reset": true, 00:17:04.635 "compare": false, 00:17:04.635 "compare_and_write": false, 00:17:04.635 "abort": true, 00:17:04.635 "nvme_admin": false, 00:17:04.635 "nvme_io": false 00:17:04.635 }, 00:17:04.635 "memory_domains": [ 00:17:04.635 { 00:17:04.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.635 "dma_device_type": 2 00:17:04.635 } 00:17:04.635 ], 00:17:04.635 "driver_specific": {} 00:17:04.635 } 00:17:04.635 ] 00:17:04.635 12:01:10 -- common/autotest_common.sh@905 -- # return 0 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.635 12:01:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.894 12:01:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.894 "name": "Existed_Raid", 00:17:04.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.894 "strip_size_kb": 64, 00:17:04.894 "state": "configuring", 00:17:04.894 "raid_level": "raid0", 00:17:04.894 "superblock": false, 00:17:04.894 "num_base_bdevs": 2, 00:17:04.894 "num_base_bdevs_discovered": 1, 00:17:04.894 "num_base_bdevs_operational": 2, 00:17:04.894 "base_bdevs_list": [ 00:17:04.894 { 00:17:04.894 "name": "BaseBdev1", 00:17:04.894 "uuid": "34d098a4-5494-4712-86a4-fba8fd4ee949", 00:17:04.894 "is_configured": true, 00:17:04.894 "data_offset": 0, 00:17:04.894 "data_size": 65536 00:17:04.894 }, 00:17:04.894 { 00:17:04.894 "name": "BaseBdev2", 00:17:04.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.894 "is_configured": false, 00:17:04.894 "data_offset": 0, 00:17:04.894 "data_size": 0 00:17:04.894 } 00:17:04.894 ] 00:17:04.894 }' 00:17:04.894 12:01:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.894 12:01:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.461 12:01:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:05.720 [2024-11-29 12:01:11.209504] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.720 [2024-11-29 12:01:11.209592] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:05.720 12:01:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:05.720 12:01:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:05.978 [2024-11-29 12:01:11.477696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.979 [2024-11-29 
12:01:11.480151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.979 [2024-11-29 12:01:11.480290] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.237 "name": "Existed_Raid", 00:17:06.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.237 "strip_size_kb": 64, 00:17:06.237 "state": "configuring", 00:17:06.237 "raid_level": "raid0", 00:17:06.237 "superblock": false, 00:17:06.237 "num_base_bdevs": 2, 00:17:06.237 "num_base_bdevs_discovered": 1, 00:17:06.237 "num_base_bdevs_operational": 2, 00:17:06.237 "base_bdevs_list": [ 00:17:06.237 { 00:17:06.237 "name": "BaseBdev1", 00:17:06.237 "uuid": "34d098a4-5494-4712-86a4-fba8fd4ee949", 00:17:06.237 "is_configured": true, 00:17:06.237 "data_offset": 0, 00:17:06.237 "data_size": 65536 00:17:06.237 }, 00:17:06.237 { 00:17:06.237 "name": "BaseBdev2", 00:17:06.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.237 "is_configured": false, 00:17:06.237 "data_offset": 0, 00:17:06.237 "data_size": 0 00:17:06.237 } 00:17:06.237 ] 00:17:06.237 }' 00:17:06.237 12:01:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.237 12:01:11 -- common/autotest_common.sh@10 -- # set +x 00:17:07.170 12:01:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:07.170 [2024-11-29 12:01:12.566826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.170 [2024-11-29 12:01:12.566902] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:07.170 [2024-11-29 12:01:12.566918] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:07.170 [2024-11-29 12:01:12.567169] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:17:07.170 [2024-11-29 12:01:12.567801] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:07.170 [2024-11-29 12:01:12.567835] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:07.170 [2024-11-29 12:01:12.568271] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:07.170 BaseBdev2 00:17:07.170 12:01:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:07.170 12:01:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:07.170 12:01:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:07.170 12:01:12 -- common/autotest_common.sh@899 -- # local i 00:17:07.170 12:01:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:07.170 12:01:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:07.170 12:01:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.426 12:01:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.684 [ 00:17:07.684 { 00:17:07.684 "name": "BaseBdev2", 00:17:07.684 "aliases": [ 00:17:07.684 "7b91d547-6ae1-4ea0-bc1e-330521f060c9" 00:17:07.684 ], 00:17:07.684 "product_name": "Malloc disk", 00:17:07.684 "block_size": 512, 00:17:07.684 "num_blocks": 65536, 00:17:07.684 "uuid": "7b91d547-6ae1-4ea0-bc1e-330521f060c9", 00:17:07.684 "assigned_rate_limits": { 00:17:07.684 "rw_ios_per_sec": 0, 00:17:07.684 "rw_mbytes_per_sec": 0, 00:17:07.684 "r_mbytes_per_sec": 0, 00:17:07.684 "w_mbytes_per_sec": 0 00:17:07.684 }, 00:17:07.684 "claimed": true, 00:17:07.684 "claim_type": "exclusive_write", 00:17:07.684 "zoned": false, 00:17:07.684 "supported_io_types": { 00:17:07.684 "read": true, 00:17:07.684 "write": true, 00:17:07.684 "unmap": true, 00:17:07.684 "write_zeroes": true, 00:17:07.684 "flush": true, 00:17:07.684 "reset": true, 00:17:07.684 "compare": false, 00:17:07.684 "compare_and_write": false, 00:17:07.684 "abort": true, 00:17:07.684 "nvme_admin": false, 00:17:07.684 "nvme_io": false 00:17:07.684 }, 00:17:07.684 "memory_domains": [ 00:17:07.684 { 00:17:07.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.684 "dma_device_type": 2 00:17:07.684 } 00:17:07.684 ], 00:17:07.684 "driver_specific": {} 00:17:07.684 } 00:17:07.684 ] 00:17:07.684 12:01:13 -- common/autotest_common.sh@905 -- # return 0 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.684 12:01:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.941 12:01:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.941 "name": "Existed_Raid", 00:17:07.941 "uuid": "4d1f90a4-93d5-490c-a7d5-a3a57d670695", 00:17:07.941 "strip_size_kb": 64, 00:17:07.942 "state": 
"online", 00:17:07.942 "raid_level": "raid0", 00:17:07.942 "superblock": false, 00:17:07.942 "num_base_bdevs": 2, 00:17:07.942 "num_base_bdevs_discovered": 2, 00:17:07.942 "num_base_bdevs_operational": 2, 00:17:07.942 "base_bdevs_list": [ 00:17:07.942 { 00:17:07.942 "name": "BaseBdev1", 00:17:07.942 "uuid": "34d098a4-5494-4712-86a4-fba8fd4ee949", 00:17:07.942 "is_configured": true, 00:17:07.942 "data_offset": 0, 00:17:07.942 "data_size": 65536 00:17:07.942 }, 00:17:07.942 { 00:17:07.942 "name": "BaseBdev2", 00:17:07.942 "uuid": "7b91d547-6ae1-4ea0-bc1e-330521f060c9", 00:17:07.942 "is_configured": true, 00:17:07.942 "data_offset": 0, 00:17:07.942 "data_size": 65536 00:17:07.942 } 00:17:07.942 ] 00:17:07.942 }' 00:17:07.942 12:01:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.942 12:01:13 -- common/autotest_common.sh@10 -- # set +x 00:17:08.509 12:01:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.767 [2024-11-29 12:01:14.251412] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.767 [2024-11-29 12:01:14.251475] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.767 [2024-11-29 12:01:14.251581] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.026 12:01:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.284 12:01:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.284 "name": "Existed_Raid", 00:17:09.284 "uuid": "4d1f90a4-93d5-490c-a7d5-a3a57d670695", 00:17:09.284 "strip_size_kb": 64, 00:17:09.284 "state": "offline", 00:17:09.284 "raid_level": "raid0", 00:17:09.284 "superblock": false, 00:17:09.284 "num_base_bdevs": 2, 00:17:09.284 "num_base_bdevs_discovered": 1, 00:17:09.284 "num_base_bdevs_operational": 1, 00:17:09.284 "base_bdevs_list": [ 00:17:09.284 { 00:17:09.284 "name": null, 00:17:09.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.284 "is_configured": false, 00:17:09.284 "data_offset": 0, 00:17:09.284 "data_size": 65536 00:17:09.284 }, 00:17:09.284 { 00:17:09.284 "name": "BaseBdev2", 00:17:09.284 "uuid": "7b91d547-6ae1-4ea0-bc1e-330521f060c9", 00:17:09.284 
"is_configured": true, 00:17:09.284 "data_offset": 0, 00:17:09.284 "data_size": 65536 00:17:09.284 } 00:17:09.284 ] 00:17:09.284 }' 00:17:09.284 12:01:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.284 12:01:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.849 12:01:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:09.849 12:01:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:09.849 12:01:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.849 12:01:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:10.106 12:01:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:10.106 12:01:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.106 12:01:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:10.364 [2024-11-29 12:01:15.706864] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.364 [2024-11-29 12:01:15.706998] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:10.364 12:01:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:10.364 12:01:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:10.364 12:01:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.364 12:01:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.622 12:01:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:10.622 12:01:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:10.622 12:01:16 -- bdev/bdev_raid.sh@287 -- # killprocess 123355 00:17:10.622 12:01:16 -- common/autotest_common.sh@936 -- # '[' -z 123355 ']' 00:17:10.622 12:01:16 -- common/autotest_common.sh@940 -- # kill -0 123355 00:17:10.622 12:01:16 -- common/autotest_common.sh@941 -- # uname 00:17:10.622 12:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.622 12:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123355 00:17:10.622 12:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:10.622 12:01:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:10.622 12:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123355' 00:17:10.622 killing process with pid 123355 00:17:10.622 12:01:16 -- common/autotest_common.sh@955 -- # kill 123355 00:17:10.622 12:01:16 -- common/autotest_common.sh@960 -- # wait 123355 00:17:10.622 [2024-11-29 12:01:16.039774] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.622 [2024-11-29 12:01:16.039861] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:10.880 00:17:10.880 real 0m9.712s 00:17:10.880 user 0m17.634s 00:17:10.880 sys 0m1.221s 00:17:10.880 12:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:10.880 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:17:10.880 ************************************ 00:17:10.880 END TEST raid_state_function_test 00:17:10.880 ************************************ 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:17:10.880 12:01:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:10.880 12:01:16 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:17:10.880 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:17:10.880 ************************************ 00:17:10.880 START TEST raid_state_function_test_sb 00:17:10.880 ************************************ 00:17:10.880 12:01:16 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:10.880 12:01:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=123676 00:17:10.881 Process raid pid: 123676 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123676' 00:17:10.881 12:01:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123676 /var/tmp/spdk-raid.sock 00:17:10.881 12:01:16 -- common/autotest_common.sh@829 -- # '[' -z 123676 ']' 00:17:10.881 12:01:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:10.881 12:01:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:10.881 12:01:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:10.881 12:01:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.881 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.139 [2024-11-29 12:01:16.421166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
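For readers following the trace, the rpc.py sequence this state-function test drives can be reproduced by hand against the bdev_svc app listening on /var/tmp/spdk-raid.sock. The following is only a condensed, illustrative sketch assembled from the commands visible in the surrounding trace (it is not the literal order of bdev_raid.sh, which deletes and recreates the raid bdev between checks); the RPC names, flags, and the jq filter are taken verbatim from the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Creating the raid0 bdev before its base bdevs exist leaves it in the "configuring" state.
    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # Adding the Malloc base bdevs (32 MiB, 512-byte blocks) lets the array assemble and go "online".
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev2

    # verify_raid_bdev_state checks state, raid level, strip size and base bdev counts from this JSON.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # raid0 has no redundancy, so removing one base bdev drops the raid bdev to "offline".
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_delete Existed_Raid
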
00:17:11.139 [2024-11-29 12:01:16.421509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.139 [2024-11-29 12:01:16.567814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.397 [2024-11-29 12:01:16.667188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.397 [2024-11-29 12:01:16.722420] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.963 12:01:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.963 12:01:17 -- common/autotest_common.sh@862 -- # return 0 00:17:11.963 12:01:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:12.221 [2024-11-29 12:01:17.566555] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.221 [2024-11-29 12:01:17.566690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.221 [2024-11-29 12:01:17.566706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.221 [2024-11-29 12:01:17.566730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.221 12:01:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.479 12:01:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.479 "name": "Existed_Raid", 00:17:12.479 "uuid": "ab8c6e9b-a1a5-4d55-a5c3-ea9a7121e25d", 00:17:12.479 "strip_size_kb": 64, 00:17:12.479 "state": "configuring", 00:17:12.479 "raid_level": "raid0", 00:17:12.479 "superblock": true, 00:17:12.479 "num_base_bdevs": 2, 00:17:12.479 "num_base_bdevs_discovered": 0, 00:17:12.479 "num_base_bdevs_operational": 2, 00:17:12.479 "base_bdevs_list": [ 00:17:12.479 { 00:17:12.479 "name": "BaseBdev1", 00:17:12.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.479 "is_configured": false, 00:17:12.479 "data_offset": 0, 00:17:12.479 "data_size": 0 00:17:12.479 }, 00:17:12.479 { 00:17:12.479 "name": "BaseBdev2", 00:17:12.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.479 "is_configured": false, 00:17:12.479 "data_offset": 0, 00:17:12.479 "data_size": 0 00:17:12.479 } 00:17:12.479 ] 00:17:12.479 }' 00:17:12.479 12:01:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.479 12:01:17 -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.045 12:01:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.304 [2024-11-29 12:01:18.730602] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.304 [2024-11-29 12:01:18.730661] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:13.304 12:01:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:13.563 [2024-11-29 12:01:19.002761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.563 [2024-11-29 12:01:19.002886] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.563 [2024-11-29 12:01:19.002902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.563 [2024-11-29 12:01:19.002931] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.563 12:01:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.822 [2024-11-29 12:01:19.246271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.822 BaseBdev1 00:17:13.822 12:01:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:13.822 12:01:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:13.822 12:01:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:13.822 12:01:19 -- common/autotest_common.sh@899 -- # local i 00:17:13.823 12:01:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:13.823 12:01:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:13.823 12:01:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.080 12:01:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.339 [ 00:17:14.339 { 00:17:14.339 "name": "BaseBdev1", 00:17:14.339 "aliases": [ 00:17:14.339 "f9aa826c-5ff4-4acd-a6d6-f7ccbeab12b7" 00:17:14.339 ], 00:17:14.339 "product_name": "Malloc disk", 00:17:14.339 "block_size": 512, 00:17:14.339 "num_blocks": 65536, 00:17:14.339 "uuid": "f9aa826c-5ff4-4acd-a6d6-f7ccbeab12b7", 00:17:14.339 "assigned_rate_limits": { 00:17:14.339 "rw_ios_per_sec": 0, 00:17:14.339 "rw_mbytes_per_sec": 0, 00:17:14.339 "r_mbytes_per_sec": 0, 00:17:14.339 "w_mbytes_per_sec": 0 00:17:14.339 }, 00:17:14.339 "claimed": true, 00:17:14.339 "claim_type": "exclusive_write", 00:17:14.339 "zoned": false, 00:17:14.339 "supported_io_types": { 00:17:14.339 "read": true, 00:17:14.339 "write": true, 00:17:14.339 "unmap": true, 00:17:14.339 "write_zeroes": true, 00:17:14.339 "flush": true, 00:17:14.339 "reset": true, 00:17:14.339 "compare": false, 00:17:14.339 "compare_and_write": false, 00:17:14.339 "abort": true, 00:17:14.339 "nvme_admin": false, 00:17:14.339 "nvme_io": false 00:17:14.339 }, 00:17:14.339 "memory_domains": [ 00:17:14.339 { 00:17:14.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.339 "dma_device_type": 2 00:17:14.339 } 00:17:14.339 ], 00:17:14.339 "driver_specific": {} 00:17:14.339 } 00:17:14.339 ] 00:17:14.339 
12:01:19 -- common/autotest_common.sh@905 -- # return 0 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.339 12:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.598 12:01:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.598 "name": "Existed_Raid", 00:17:14.598 "uuid": "df6b4506-9342-42a0-ac26-e8a3d1d01ba7", 00:17:14.598 "strip_size_kb": 64, 00:17:14.598 "state": "configuring", 00:17:14.598 "raid_level": "raid0", 00:17:14.598 "superblock": true, 00:17:14.598 "num_base_bdevs": 2, 00:17:14.598 "num_base_bdevs_discovered": 1, 00:17:14.598 "num_base_bdevs_operational": 2, 00:17:14.598 "base_bdevs_list": [ 00:17:14.598 { 00:17:14.598 "name": "BaseBdev1", 00:17:14.598 "uuid": "f9aa826c-5ff4-4acd-a6d6-f7ccbeab12b7", 00:17:14.598 "is_configured": true, 00:17:14.598 "data_offset": 2048, 00:17:14.598 "data_size": 63488 00:17:14.598 }, 00:17:14.598 { 00:17:14.598 "name": "BaseBdev2", 00:17:14.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.598 "is_configured": false, 00:17:14.598 "data_offset": 0, 00:17:14.598 "data_size": 0 00:17:14.598 } 00:17:14.598 ] 00:17:14.598 }' 00:17:14.598 12:01:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.598 12:01:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.170 12:01:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.430 [2024-11-29 12:01:20.898821] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.430 [2024-11-29 12:01:20.898921] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:15.430 12:01:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:15.430 12:01:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.690 12:01:21 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:15.949 BaseBdev1 00:17:15.949 12:01:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:15.949 12:01:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:15.949 12:01:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:15.949 12:01:21 -- common/autotest_common.sh@899 -- # local i 00:17:15.949 12:01:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:15.949 12:01:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:15.949 12:01:21 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.207 12:01:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:16.466 [ 00:17:16.466 { 00:17:16.466 "name": "BaseBdev1", 00:17:16.466 "aliases": [ 00:17:16.466 "bd43f447-660e-4a88-873c-85523f604eef" 00:17:16.466 ], 00:17:16.466 "product_name": "Malloc disk", 00:17:16.466 "block_size": 512, 00:17:16.466 "num_blocks": 65536, 00:17:16.466 "uuid": "bd43f447-660e-4a88-873c-85523f604eef", 00:17:16.466 "assigned_rate_limits": { 00:17:16.466 "rw_ios_per_sec": 0, 00:17:16.466 "rw_mbytes_per_sec": 0, 00:17:16.466 "r_mbytes_per_sec": 0, 00:17:16.466 "w_mbytes_per_sec": 0 00:17:16.466 }, 00:17:16.466 "claimed": false, 00:17:16.466 "zoned": false, 00:17:16.466 "supported_io_types": { 00:17:16.466 "read": true, 00:17:16.466 "write": true, 00:17:16.466 "unmap": true, 00:17:16.466 "write_zeroes": true, 00:17:16.466 "flush": true, 00:17:16.466 "reset": true, 00:17:16.466 "compare": false, 00:17:16.466 "compare_and_write": false, 00:17:16.466 "abort": true, 00:17:16.466 "nvme_admin": false, 00:17:16.466 "nvme_io": false 00:17:16.466 }, 00:17:16.466 "memory_domains": [ 00:17:16.466 { 00:17:16.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.466 "dma_device_type": 2 00:17:16.466 } 00:17:16.466 ], 00:17:16.466 "driver_specific": {} 00:17:16.466 } 00:17:16.466 ] 00:17:16.466 12:01:21 -- common/autotest_common.sh@905 -- # return 0 00:17:16.466 12:01:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:16.724 [2024-11-29 12:01:22.058694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.724 [2024-11-29 12:01:22.060772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.724 [2024-11-29 12:01:22.060836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.724 12:01:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.982 12:01:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.982 "name": "Existed_Raid", 00:17:16.982 "uuid": "a6e8e792-6e88-4eb3-b550-2e84445b4f2d", 00:17:16.982 "strip_size_kb": 64, 00:17:16.982 "state": 
"configuring", 00:17:16.982 "raid_level": "raid0", 00:17:16.982 "superblock": true, 00:17:16.982 "num_base_bdevs": 2, 00:17:16.983 "num_base_bdevs_discovered": 1, 00:17:16.983 "num_base_bdevs_operational": 2, 00:17:16.983 "base_bdevs_list": [ 00:17:16.983 { 00:17:16.983 "name": "BaseBdev1", 00:17:16.983 "uuid": "bd43f447-660e-4a88-873c-85523f604eef", 00:17:16.983 "is_configured": true, 00:17:16.983 "data_offset": 2048, 00:17:16.983 "data_size": 63488 00:17:16.983 }, 00:17:16.983 { 00:17:16.983 "name": "BaseBdev2", 00:17:16.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.983 "is_configured": false, 00:17:16.983 "data_offset": 0, 00:17:16.983 "data_size": 0 00:17:16.983 } 00:17:16.983 ] 00:17:16.983 }' 00:17:16.983 12:01:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.983 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:17:17.549 12:01:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.808 [2024-11-29 12:01:23.226412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.808 [2024-11-29 12:01:23.226703] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:17:17.808 [2024-11-29 12:01:23.226724] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:17.808 [2024-11-29 12:01:23.226926] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:17:17.808 [2024-11-29 12:01:23.227437] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:17:17.808 [2024-11-29 12:01:23.227469] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:17:17.808 BaseBdev2 00:17:17.808 [2024-11-29 12:01:23.227698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.808 12:01:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:17.808 12:01:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:17.808 12:01:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:17.808 12:01:23 -- common/autotest_common.sh@899 -- # local i 00:17:17.808 12:01:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:17.808 12:01:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:17.808 12:01:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.067 12:01:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:18.325 [ 00:17:18.325 { 00:17:18.325 "name": "BaseBdev2", 00:17:18.325 "aliases": [ 00:17:18.325 "af5ddbf2-3fba-4076-a23c-04463029633d" 00:17:18.325 ], 00:17:18.325 "product_name": "Malloc disk", 00:17:18.325 "block_size": 512, 00:17:18.325 "num_blocks": 65536, 00:17:18.325 "uuid": "af5ddbf2-3fba-4076-a23c-04463029633d", 00:17:18.325 "assigned_rate_limits": { 00:17:18.325 "rw_ios_per_sec": 0, 00:17:18.325 "rw_mbytes_per_sec": 0, 00:17:18.325 "r_mbytes_per_sec": 0, 00:17:18.325 "w_mbytes_per_sec": 0 00:17:18.325 }, 00:17:18.325 "claimed": true, 00:17:18.325 "claim_type": "exclusive_write", 00:17:18.325 "zoned": false, 00:17:18.325 "supported_io_types": { 00:17:18.325 "read": true, 00:17:18.325 "write": true, 00:17:18.325 "unmap": true, 00:17:18.325 "write_zeroes": true, 00:17:18.325 "flush": true, 00:17:18.325 
"reset": true, 00:17:18.325 "compare": false, 00:17:18.325 "compare_and_write": false, 00:17:18.325 "abort": true, 00:17:18.325 "nvme_admin": false, 00:17:18.325 "nvme_io": false 00:17:18.325 }, 00:17:18.325 "memory_domains": [ 00:17:18.325 { 00:17:18.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.325 "dma_device_type": 2 00:17:18.325 } 00:17:18.325 ], 00:17:18.325 "driver_specific": {} 00:17:18.325 } 00:17:18.325 ] 00:17:18.325 12:01:23 -- common/autotest_common.sh@905 -- # return 0 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.325 12:01:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.583 12:01:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.583 "name": "Existed_Raid", 00:17:18.583 "uuid": "a6e8e792-6e88-4eb3-b550-2e84445b4f2d", 00:17:18.583 "strip_size_kb": 64, 00:17:18.583 "state": "online", 00:17:18.583 "raid_level": "raid0", 00:17:18.583 "superblock": true, 00:17:18.583 "num_base_bdevs": 2, 00:17:18.583 "num_base_bdevs_discovered": 2, 00:17:18.584 "num_base_bdevs_operational": 2, 00:17:18.584 "base_bdevs_list": [ 00:17:18.584 { 00:17:18.584 "name": "BaseBdev1", 00:17:18.584 "uuid": "bd43f447-660e-4a88-873c-85523f604eef", 00:17:18.584 "is_configured": true, 00:17:18.584 "data_offset": 2048, 00:17:18.584 "data_size": 63488 00:17:18.584 }, 00:17:18.584 { 00:17:18.584 "name": "BaseBdev2", 00:17:18.584 "uuid": "af5ddbf2-3fba-4076-a23c-04463029633d", 00:17:18.584 "is_configured": true, 00:17:18.584 "data_offset": 2048, 00:17:18.584 "data_size": 63488 00:17:18.584 } 00:17:18.584 ] 00:17:18.584 }' 00:17:18.584 12:01:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.584 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:19.151 12:01:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:19.415 [2024-11-29 12:01:24.846921] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:19.415 [2024-11-29 12:01:24.846973] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.415 [2024-11-29 12:01:24.847066] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:19.415 
12:01:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.415 12:01:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.416 12:01:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.416 12:01:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.416 12:01:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.673 12:01:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.673 "name": "Existed_Raid", 00:17:19.673 "uuid": "a6e8e792-6e88-4eb3-b550-2e84445b4f2d", 00:17:19.673 "strip_size_kb": 64, 00:17:19.673 "state": "offline", 00:17:19.673 "raid_level": "raid0", 00:17:19.673 "superblock": true, 00:17:19.673 "num_base_bdevs": 2, 00:17:19.673 "num_base_bdevs_discovered": 1, 00:17:19.673 "num_base_bdevs_operational": 1, 00:17:19.673 "base_bdevs_list": [ 00:17:19.673 { 00:17:19.673 "name": null, 00:17:19.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.673 "is_configured": false, 00:17:19.673 "data_offset": 2048, 00:17:19.673 "data_size": 63488 00:17:19.673 }, 00:17:19.673 { 00:17:19.673 "name": "BaseBdev2", 00:17:19.673 "uuid": "af5ddbf2-3fba-4076-a23c-04463029633d", 00:17:19.673 "is_configured": true, 00:17:19.673 "data_offset": 2048, 00:17:19.673 "data_size": 63488 00:17:19.673 } 00:17:19.673 ] 00:17:19.673 }' 00:17:19.673 12:01:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.673 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:17:20.607 12:01:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:20.607 12:01:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.607 12:01:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.607 12:01:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:20.865 [2024-11-29 12:01:26.322907] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.865 [2024-11-29 12:01:26.323014] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.865 12:01:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.123 12:01:26 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:17:21.123 12:01:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:21.123 12:01:26 -- bdev/bdev_raid.sh@287 -- # killprocess 123676 00:17:21.123 12:01:26 -- common/autotest_common.sh@936 -- # '[' -z 123676 ']' 00:17:21.123 12:01:26 -- common/autotest_common.sh@940 -- # kill -0 123676 00:17:21.123 12:01:26 -- common/autotest_common.sh@941 -- # uname 00:17:21.123 12:01:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.123 12:01:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123676 00:17:21.381 killing process with pid 123676 00:17:21.381 12:01:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:21.381 12:01:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:21.381 12:01:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123676' 00:17:21.381 12:01:26 -- common/autotest_common.sh@955 -- # kill 123676 00:17:21.381 12:01:26 -- common/autotest_common.sh@960 -- # wait 123676 00:17:21.381 [2024-11-29 12:01:26.647699] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.381 [2024-11-29 12:01:26.648106] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.640 ************************************ 00:17:21.640 END TEST raid_state_function_test_sb 00:17:21.640 ************************************ 00:17:21.640 12:01:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:21.640 00:17:21.640 real 0m10.606s 00:17:21.640 user 0m19.116s 00:17:21.640 sys 0m1.446s 00:17:21.640 12:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:21.640 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:17:21.640 12:01:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:21.640 12:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.640 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.640 ************************************ 00:17:21.640 START TEST raid_superblock_test 00:17:21.640 ************************************ 00:17:21.640 12:01:27 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:21.640 12:01:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:21.641 12:01:27 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:21.641 12:01:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:21.641 12:01:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:21.641 12:01:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=124001 00:17:21.641 12:01:27 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:21.641 12:01:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124001 /var/tmp/spdk-raid.sock 00:17:21.641 12:01:27 -- common/autotest_common.sh@829 -- # '[' -z 124001 ']' 00:17:21.641 12:01:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:21.641 12:01:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.641 12:01:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:21.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:21.641 12:01:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.641 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.641 [2024-11-29 12:01:27.079175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:21.641 [2024-11-29 12:01:27.079405] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124001 ] 00:17:21.899 [2024-11-29 12:01:27.227511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.899 [2024-11-29 12:01:27.319205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.899 [2024-11-29 12:01:27.375001] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.833 12:01:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.833 12:01:28 -- common/autotest_common.sh@862 -- # return 0 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:22.833 malloc1 00:17:22.833 12:01:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.093 [2024-11-29 12:01:28.531813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.093 [2024-11-29 12:01:28.531964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.093 [2024-11-29 12:01:28.532025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:23.093 [2024-11-29 12:01:28.532098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.093 [2024-11-29 12:01:28.535285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.093 [2024-11-29 12:01:28.535372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.093 pt1 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.093 12:01:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:23.413 malloc2 00:17:23.413 12:01:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.699 [2024-11-29 12:01:29.042677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.699 [2024-11-29 12:01:29.042830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.699 [2024-11-29 12:01:29.042883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:23.699 [2024-11-29 12:01:29.042940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.699 [2024-11-29 12:01:29.045855] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.699 [2024-11-29 12:01:29.045917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.699 pt2 00:17:23.699 12:01:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:23.699 12:01:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:23.699 12:01:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:23.957 [2024-11-29 12:01:29.266867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.957 [2024-11-29 12:01:29.269450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.957 [2024-11-29 12:01:29.269714] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:17:23.957 [2024-11-29 12:01:29.269732] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:23.957 [2024-11-29 12:01:29.269911] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:23.957 [2024-11-29 12:01:29.270427] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:17:23.957 [2024-11-29 12:01:29.270450] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:17:23.957 [2024-11-29 12:01:29.270692] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.957 12:01:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.215 12:01:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.215 "name": "raid_bdev1", 00:17:24.215 "uuid": "305cbf9b-048d-4546-884b-7adfc4dfb467", 00:17:24.215 "strip_size_kb": 64, 00:17:24.216 "state": "online", 00:17:24.216 "raid_level": "raid0", 00:17:24.216 "superblock": true, 00:17:24.216 "num_base_bdevs": 2, 00:17:24.216 "num_base_bdevs_discovered": 2, 00:17:24.216 "num_base_bdevs_operational": 2, 00:17:24.216 "base_bdevs_list": [ 00:17:24.216 { 00:17:24.216 "name": "pt1", 00:17:24.216 "uuid": "6b08ba99-77bf-55b5-83ea-fe3628e0a381", 00:17:24.216 "is_configured": true, 00:17:24.216 "data_offset": 2048, 00:17:24.216 "data_size": 63488 00:17:24.216 }, 00:17:24.216 { 00:17:24.216 "name": "pt2", 00:17:24.216 "uuid": "0f82e940-87f9-5649-912b-03b63cfcd8c0", 00:17:24.216 "is_configured": true, 00:17:24.216 "data_offset": 2048, 00:17:24.216 "data_size": 63488 00:17:24.216 } 00:17:24.216 ] 00:17:24.216 }' 00:17:24.216 12:01:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.216 12:01:29 -- common/autotest_common.sh@10 -- # set +x 00:17:24.781 12:01:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:24.781 12:01:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:25.039 [2024-11-29 12:01:30.451374] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.039 12:01:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=305cbf9b-048d-4546-884b-7adfc4dfb467 00:17:25.039 12:01:30 -- bdev/bdev_raid.sh@380 -- # '[' -z 305cbf9b-048d-4546-884b-7adfc4dfb467 ']' 00:17:25.039 12:01:30 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:25.297 [2024-11-29 12:01:30.687215] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.297 [2024-11-29 12:01:30.687273] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.297 [2024-11-29 12:01:30.687406] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.297 [2024-11-29 12:01:30.687494] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.297 [2024-11-29 12:01:30.687511] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:17:25.297 12:01:30 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.297 12:01:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:25.556 12:01:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:25.556 12:01:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:25.556 12:01:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:25.556 12:01:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:17:25.814 12:01:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:25.814 12:01:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:26.072 12:01:31 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:26.072 12:01:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:26.331 12:01:31 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:26.331 12:01:31 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:17:26.331 12:01:31 -- common/autotest_common.sh@650 -- # local es=0 00:17:26.331 12:01:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:17:26.331 12:01:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.331 12:01:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.331 12:01:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.331 12:01:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.331 12:01:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.331 12:01:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.331 12:01:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.331 12:01:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:26.331 12:01:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:17:26.589 [2024-11-29 12:01:32.007476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:26.589 [2024-11-29 12:01:32.010031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:26.589 [2024-11-29 12:01:32.010118] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:26.589 [2024-11-29 12:01:32.010255] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:26.589 [2024-11-29 12:01:32.010307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.589 [2024-11-29 12:01:32.010321] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:17:26.589 request: 00:17:26.589 { 00:17:26.589 "name": "raid_bdev1", 00:17:26.589 "raid_level": "raid0", 00:17:26.589 "base_bdevs": [ 00:17:26.589 "malloc1", 00:17:26.589 "malloc2" 00:17:26.589 ], 00:17:26.589 "superblock": false, 00:17:26.589 "strip_size_kb": 64, 00:17:26.589 "method": "bdev_raid_create", 00:17:26.589 "req_id": 1 00:17:26.589 } 00:17:26.589 Got JSON-RPC error response 00:17:26.589 response: 00:17:26.589 { 00:17:26.589 "code": -17, 00:17:26.589 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:26.589 } 00:17:26.589 12:01:32 -- common/autotest_common.sh@653 -- # es=1 00:17:26.589 12:01:32 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:26.589 12:01:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:26.589 12:01:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:26.589 12:01:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.589 12:01:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:26.847 12:01:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:26.847 12:01:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:26.847 12:01:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.106 [2024-11-29 12:01:32.479452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.106 [2024-11-29 12:01:32.479616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.106 [2024-11-29 12:01:32.479688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:27.106 [2024-11-29 12:01:32.479725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.106 [2024-11-29 12:01:32.482675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.106 [2024-11-29 12:01:32.482737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.106 [2024-11-29 12:01:32.482870] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:27.106 [2024-11-29 12:01:32.482952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.106 pt1 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.106 12:01:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.364 12:01:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.364 "name": "raid_bdev1", 00:17:27.364 "uuid": "305cbf9b-048d-4546-884b-7adfc4dfb467", 00:17:27.364 "strip_size_kb": 64, 00:17:27.364 "state": "configuring", 00:17:27.364 "raid_level": "raid0", 00:17:27.364 "superblock": true, 00:17:27.364 "num_base_bdevs": 2, 00:17:27.364 "num_base_bdevs_discovered": 1, 00:17:27.364 "num_base_bdevs_operational": 2, 00:17:27.364 "base_bdevs_list": [ 00:17:27.364 { 00:17:27.364 "name": "pt1", 00:17:27.364 "uuid": "6b08ba99-77bf-55b5-83ea-fe3628e0a381", 00:17:27.364 "is_configured": true, 00:17:27.364 "data_offset": 2048, 00:17:27.364 "data_size": 63488 00:17:27.364 }, 00:17:27.364 { 00:17:27.364 "name": null, 00:17:27.364 "uuid": 
"0f82e940-87f9-5649-912b-03b63cfcd8c0", 00:17:27.364 "is_configured": false, 00:17:27.364 "data_offset": 2048, 00:17:27.364 "data_size": 63488 00:17:27.364 } 00:17:27.364 ] 00:17:27.364 }' 00:17:27.364 12:01:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.364 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:17:27.929 12:01:33 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:17:27.929 12:01:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:27.929 12:01:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:27.929 12:01:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.188 [2024-11-29 12:01:33.623755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.188 [2024-11-29 12:01:33.623915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.188 [2024-11-29 12:01:33.623965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:28.188 [2024-11-29 12:01:33.623998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.188 [2024-11-29 12:01:33.624597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.188 [2024-11-29 12:01:33.624650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.188 [2024-11-29 12:01:33.624761] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:28.188 [2024-11-29 12:01:33.624799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.188 [2024-11-29 12:01:33.624943] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:17:28.188 [2024-11-29 12:01:33.624969] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:28.188 [2024-11-29 12:01:33.625065] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:28.188 [2024-11-29 12:01:33.625423] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:17:28.188 [2024-11-29 12:01:33.625447] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:17:28.188 [2024-11-29 12:01:33.625575] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.188 pt2 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.188 12:01:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.754 12:01:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.754 "name": "raid_bdev1", 00:17:28.754 "uuid": "305cbf9b-048d-4546-884b-7adfc4dfb467", 00:17:28.754 "strip_size_kb": 64, 00:17:28.754 "state": "online", 00:17:28.754 "raid_level": "raid0", 00:17:28.754 "superblock": true, 00:17:28.754 "num_base_bdevs": 2, 00:17:28.754 "num_base_bdevs_discovered": 2, 00:17:28.754 "num_base_bdevs_operational": 2, 00:17:28.754 "base_bdevs_list": [ 00:17:28.754 { 00:17:28.754 "name": "pt1", 00:17:28.754 "uuid": "6b08ba99-77bf-55b5-83ea-fe3628e0a381", 00:17:28.754 "is_configured": true, 00:17:28.754 "data_offset": 2048, 00:17:28.754 "data_size": 63488 00:17:28.754 }, 00:17:28.754 { 00:17:28.754 "name": "pt2", 00:17:28.754 "uuid": "0f82e940-87f9-5649-912b-03b63cfcd8c0", 00:17:28.754 "is_configured": true, 00:17:28.754 "data_offset": 2048, 00:17:28.754 "data_size": 63488 00:17:28.754 } 00:17:28.754 ] 00:17:28.754 }' 00:17:28.754 12:01:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.754 12:01:33 -- common/autotest_common.sh@10 -- # set +x 00:17:29.320 12:01:34 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:29.320 12:01:34 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:29.579 [2024-11-29 12:01:34.880782] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.579 12:01:34 -- bdev/bdev_raid.sh@430 -- # '[' 305cbf9b-048d-4546-884b-7adfc4dfb467 '!=' 305cbf9b-048d-4546-884b-7adfc4dfb467 ']' 00:17:29.579 12:01:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:29.579 12:01:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:29.579 12:01:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:29.579 12:01:34 -- bdev/bdev_raid.sh@511 -- # killprocess 124001 00:17:29.579 12:01:34 -- common/autotest_common.sh@936 -- # '[' -z 124001 ']' 00:17:29.579 12:01:34 -- common/autotest_common.sh@940 -- # kill -0 124001 00:17:29.579 12:01:34 -- common/autotest_common.sh@941 -- # uname 00:17:29.579 12:01:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.579 12:01:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124001 00:17:29.579 killing process with pid 124001 00:17:29.579 12:01:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:29.579 12:01:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:29.579 12:01:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124001' 00:17:29.579 12:01:34 -- common/autotest_common.sh@955 -- # kill 124001 00:17:29.579 12:01:34 -- common/autotest_common.sh@960 -- # wait 124001 00:17:29.579 [2024-11-29 12:01:34.925603] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.579 [2024-11-29 12:01:34.925714] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.579 [2024-11-29 12:01:34.925811] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.579 [2024-11-29 12:01:34.925824] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:17:29.579 [2024-11-29 12:01:34.952063] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.837 ************************************ 00:17:29.837 END TEST raid_superblock_test 00:17:29.837 
************************************ 00:17:29.837 12:01:35 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:29.837 00:17:29.837 real 0m8.251s 00:17:29.837 user 0m14.833s 00:17:29.837 sys 0m1.074s 00:17:29.837 12:01:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:29.837 12:01:35 -- common/autotest_common.sh@10 -- # set +x 00:17:29.837 12:01:35 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:29.837 12:01:35 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:17:29.837 12:01:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:29.837 12:01:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:29.837 12:01:35 -- common/autotest_common.sh@10 -- # set +x 00:17:29.837 ************************************ 00:17:29.837 START TEST raid_state_function_test 00:17:29.837 ************************************ 00:17:29.837 12:01:35 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:17:29.837 12:01:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=124252 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124252' 00:17:29.838 Process raid pid: 124252 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124252 /var/tmp/spdk-raid.sock 00:17:29.838 12:01:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:29.838 12:01:35 -- common/autotest_common.sh@829 -- # '[' -z 124252 ']' 00:17:29.838 12:01:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.838 12:01:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:29.838 12:01:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:29.838 12:01:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.838 12:01:35 -- common/autotest_common.sh@10 -- # set +x 00:17:30.097 [2024-11-29 12:01:35.389582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:30.097 [2024-11-29 12:01:35.389808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.097 [2024-11-29 12:01:35.541133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.356 [2024-11-29 12:01:35.638401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.356 [2024-11-29 12:01:35.698018] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.922 12:01:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.922 12:01:36 -- common/autotest_common.sh@862 -- # return 0 00:17:30.922 12:01:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:31.180 [2024-11-29 12:01:36.618901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.180 [2024-11-29 12:01:36.619004] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.180 [2024-11-29 12:01:36.619026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.180 [2024-11-29 12:01:36.619045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.180 12:01:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.437 12:01:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.437 "name": "Existed_Raid", 00:17:31.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.437 "strip_size_kb": 64, 00:17:31.437 "state": "configuring", 00:17:31.437 "raid_level": "concat", 00:17:31.437 "superblock": false, 00:17:31.437 "num_base_bdevs": 2, 00:17:31.437 "num_base_bdevs_discovered": 0, 00:17:31.437 "num_base_bdevs_operational": 2, 00:17:31.437 "base_bdevs_list": [ 00:17:31.437 { 00:17:31.437 "name": "BaseBdev1", 00:17:31.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.437 "is_configured": false, 
00:17:31.437 "data_offset": 0, 00:17:31.437 "data_size": 0 00:17:31.437 }, 00:17:31.437 { 00:17:31.437 "name": "BaseBdev2", 00:17:31.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.438 "is_configured": false, 00:17:31.438 "data_offset": 0, 00:17:31.438 "data_size": 0 00:17:31.438 } 00:17:31.438 ] 00:17:31.438 }' 00:17:31.438 12:01:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.438 12:01:36 -- common/autotest_common.sh@10 -- # set +x 00:17:32.371 12:01:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:32.371 [2024-11-29 12:01:37.738952] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.371 [2024-11-29 12:01:37.739016] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:32.371 12:01:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:32.629 [2024-11-29 12:01:38.051040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.629 [2024-11-29 12:01:38.051141] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.629 [2024-11-29 12:01:38.051156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.629 [2024-11-29 12:01:38.051183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.629 12:01:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:32.886 [2024-11-29 12:01:38.314653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.886 BaseBdev1 00:17:32.886 12:01:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:32.886 12:01:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:32.886 12:01:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:32.886 12:01:38 -- common/autotest_common.sh@899 -- # local i 00:17:32.886 12:01:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:32.886 12:01:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:32.886 12:01:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.144 12:01:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:33.401 [ 00:17:33.401 { 00:17:33.401 "name": "BaseBdev1", 00:17:33.401 "aliases": [ 00:17:33.401 "55d15ace-8861-43eb-a868-21693f9cb797" 00:17:33.401 ], 00:17:33.401 "product_name": "Malloc disk", 00:17:33.401 "block_size": 512, 00:17:33.401 "num_blocks": 65536, 00:17:33.401 "uuid": "55d15ace-8861-43eb-a868-21693f9cb797", 00:17:33.401 "assigned_rate_limits": { 00:17:33.401 "rw_ios_per_sec": 0, 00:17:33.401 "rw_mbytes_per_sec": 0, 00:17:33.401 "r_mbytes_per_sec": 0, 00:17:33.401 "w_mbytes_per_sec": 0 00:17:33.401 }, 00:17:33.401 "claimed": true, 00:17:33.401 "claim_type": "exclusive_write", 00:17:33.401 "zoned": false, 00:17:33.401 "supported_io_types": { 00:17:33.401 "read": true, 00:17:33.401 "write": true, 00:17:33.401 "unmap": true, 00:17:33.401 "write_zeroes": true, 00:17:33.401 "flush": true, 00:17:33.401 "reset": true, 00:17:33.401 
"compare": false, 00:17:33.401 "compare_and_write": false, 00:17:33.401 "abort": true, 00:17:33.401 "nvme_admin": false, 00:17:33.401 "nvme_io": false 00:17:33.401 }, 00:17:33.401 "memory_domains": [ 00:17:33.401 { 00:17:33.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.401 "dma_device_type": 2 00:17:33.401 } 00:17:33.401 ], 00:17:33.401 "driver_specific": {} 00:17:33.401 } 00:17:33.401 ] 00:17:33.401 12:01:38 -- common/autotest_common.sh@905 -- # return 0 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.401 12:01:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.659 12:01:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.659 "name": "Existed_Raid", 00:17:33.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.659 "strip_size_kb": 64, 00:17:33.659 "state": "configuring", 00:17:33.659 "raid_level": "concat", 00:17:33.659 "superblock": false, 00:17:33.659 "num_base_bdevs": 2, 00:17:33.659 "num_base_bdevs_discovered": 1, 00:17:33.659 "num_base_bdevs_operational": 2, 00:17:33.659 "base_bdevs_list": [ 00:17:33.659 { 00:17:33.659 "name": "BaseBdev1", 00:17:33.659 "uuid": "55d15ace-8861-43eb-a868-21693f9cb797", 00:17:33.659 "is_configured": true, 00:17:33.659 "data_offset": 0, 00:17:33.659 "data_size": 65536 00:17:33.659 }, 00:17:33.659 { 00:17:33.659 "name": "BaseBdev2", 00:17:33.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.659 "is_configured": false, 00:17:33.659 "data_offset": 0, 00:17:33.659 "data_size": 0 00:17:33.659 } 00:17:33.659 ] 00:17:33.659 }' 00:17:33.659 12:01:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.659 12:01:39 -- common/autotest_common.sh@10 -- # set +x 00:17:34.226 12:01:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:34.793 [2024-11-29 12:01:40.003999] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.793 [2024-11-29 12:01:40.004114] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:34.793 [2024-11-29 12:01:40.280115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.793 [2024-11-29 12:01:40.282475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:17:34.793 [2024-11-29 12:01:40.282545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.793 12:01:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.369 12:01:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.369 "name": "Existed_Raid", 00:17:35.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.369 "strip_size_kb": 64, 00:17:35.369 "state": "configuring", 00:17:35.369 "raid_level": "concat", 00:17:35.369 "superblock": false, 00:17:35.369 "num_base_bdevs": 2, 00:17:35.369 "num_base_bdevs_discovered": 1, 00:17:35.369 "num_base_bdevs_operational": 2, 00:17:35.369 "base_bdevs_list": [ 00:17:35.369 { 00:17:35.369 "name": "BaseBdev1", 00:17:35.369 "uuid": "55d15ace-8861-43eb-a868-21693f9cb797", 00:17:35.369 "is_configured": true, 00:17:35.369 "data_offset": 0, 00:17:35.369 "data_size": 65536 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "name": "BaseBdev2", 00:17:35.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.369 "is_configured": false, 00:17:35.369 "data_offset": 0, 00:17:35.369 "data_size": 0 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }' 00:17:35.369 12:01:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.369 12:01:40 -- common/autotest_common.sh@10 -- # set +x 00:17:35.970 12:01:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.228 [2024-11-29 12:01:41.498570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.228 [2024-11-29 12:01:41.498666] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:36.228 [2024-11-29 12:01:41.498684] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:36.228 [2024-11-29 12:01:41.498919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:17:36.228 [2024-11-29 12:01:41.499529] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:36.228 [2024-11-29 12:01:41.499561] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:36.228 [2024-11-29 12:01:41.499931] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.228 BaseBdev2 00:17:36.228 12:01:41 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:17:36.228 12:01:41 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:36.228 12:01:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:36.228 12:01:41 -- common/autotest_common.sh@899 -- # local i 00:17:36.228 12:01:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:36.228 12:01:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:36.228 12:01:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.487 12:01:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.487 [ 00:17:36.487 { 00:17:36.487 "name": "BaseBdev2", 00:17:36.487 "aliases": [ 00:17:36.487 "e57df9cb-0a2e-4432-b69c-96d03ac65970" 00:17:36.488 ], 00:17:36.488 "product_name": "Malloc disk", 00:17:36.488 "block_size": 512, 00:17:36.488 "num_blocks": 65536, 00:17:36.488 "uuid": "e57df9cb-0a2e-4432-b69c-96d03ac65970", 00:17:36.488 "assigned_rate_limits": { 00:17:36.488 "rw_ios_per_sec": 0, 00:17:36.488 "rw_mbytes_per_sec": 0, 00:17:36.488 "r_mbytes_per_sec": 0, 00:17:36.488 "w_mbytes_per_sec": 0 00:17:36.488 }, 00:17:36.488 "claimed": true, 00:17:36.488 "claim_type": "exclusive_write", 00:17:36.488 "zoned": false, 00:17:36.488 "supported_io_types": { 00:17:36.488 "read": true, 00:17:36.488 "write": true, 00:17:36.488 "unmap": true, 00:17:36.488 "write_zeroes": true, 00:17:36.488 "flush": true, 00:17:36.488 "reset": true, 00:17:36.488 "compare": false, 00:17:36.488 "compare_and_write": false, 00:17:36.488 "abort": true, 00:17:36.488 "nvme_admin": false, 00:17:36.488 "nvme_io": false 00:17:36.488 }, 00:17:36.488 "memory_domains": [ 00:17:36.488 { 00:17:36.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.488 "dma_device_type": 2 00:17:36.488 } 00:17:36.488 ], 00:17:36.488 "driver_specific": {} 00:17:36.488 } 00:17:36.488 ] 00:17:36.488 12:01:41 -- common/autotest_common.sh@905 -- # return 0 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.488 12:01:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.746 12:01:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.746 "name": "Existed_Raid", 00:17:36.746 "uuid": "07688336-91d2-41dd-a6be-0276e3850ae5", 00:17:36.746 "strip_size_kb": 64, 00:17:36.746 "state": "online", 00:17:36.746 "raid_level": "concat", 00:17:36.746 "superblock": false, 
00:17:36.746 "num_base_bdevs": 2, 00:17:36.746 "num_base_bdevs_discovered": 2, 00:17:36.746 "num_base_bdevs_operational": 2, 00:17:36.746 "base_bdevs_list": [ 00:17:36.746 { 00:17:36.746 "name": "BaseBdev1", 00:17:36.746 "uuid": "55d15ace-8861-43eb-a868-21693f9cb797", 00:17:36.746 "is_configured": true, 00:17:36.746 "data_offset": 0, 00:17:36.746 "data_size": 65536 00:17:36.746 }, 00:17:36.746 { 00:17:36.746 "name": "BaseBdev2", 00:17:36.746 "uuid": "e57df9cb-0a2e-4432-b69c-96d03ac65970", 00:17:36.746 "is_configured": true, 00:17:36.746 "data_offset": 0, 00:17:36.746 "data_size": 65536 00:17:36.746 } 00:17:36.746 ] 00:17:36.746 }' 00:17:36.746 12:01:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.746 12:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:37.682 12:01:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:37.682 [2024-11-29 12:01:43.091166] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.682 [2024-11-29 12:01:43.091220] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.682 [2024-11-29 12:01:43.091337] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.682 12:01:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.940 12:01:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.940 "name": "Existed_Raid", 00:17:37.940 "uuid": "07688336-91d2-41dd-a6be-0276e3850ae5", 00:17:37.940 "strip_size_kb": 64, 00:17:37.940 "state": "offline", 00:17:37.940 "raid_level": "concat", 00:17:37.940 "superblock": false, 00:17:37.940 "num_base_bdevs": 2, 00:17:37.940 "num_base_bdevs_discovered": 1, 00:17:37.940 "num_base_bdevs_operational": 1, 00:17:37.940 "base_bdevs_list": [ 00:17:37.940 { 00:17:37.940 "name": null, 00:17:37.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.940 "is_configured": false, 00:17:37.940 "data_offset": 0, 00:17:37.940 "data_size": 65536 00:17:37.940 }, 00:17:37.940 { 00:17:37.940 "name": "BaseBdev2", 00:17:37.940 "uuid": "e57df9cb-0a2e-4432-b69c-96d03ac65970", 00:17:37.940 "is_configured": true, 00:17:37.940 "data_offset": 0, 00:17:37.940 
"data_size": 65536 00:17:37.940 } 00:17:37.940 ] 00:17:37.940 }' 00:17:37.940 12:01:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.940 12:01:43 -- common/autotest_common.sh@10 -- # set +x 00:17:38.507 12:01:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:38.507 12:01:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:38.507 12:01:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:38.507 12:01:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:39.074 [2024-11-29 12:01:44.503097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.074 [2024-11-29 12:01:44.503191] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.074 12:01:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.331 12:01:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:39.331 12:01:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:39.331 12:01:44 -- bdev/bdev_raid.sh@287 -- # killprocess 124252 00:17:39.331 12:01:44 -- common/autotest_common.sh@936 -- # '[' -z 124252 ']' 00:17:39.332 12:01:44 -- common/autotest_common.sh@940 -- # kill -0 124252 00:17:39.332 12:01:44 -- common/autotest_common.sh@941 -- # uname 00:17:39.332 12:01:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.332 12:01:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124252 00:17:39.332 12:01:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:39.332 12:01:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:39.332 killing process with pid 124252 00:17:39.332 12:01:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124252' 00:17:39.332 12:01:44 -- common/autotest_common.sh@955 -- # kill 124252 00:17:39.332 [2024-11-29 12:01:44.842235] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.332 12:01:44 -- common/autotest_common.sh@960 -- # wait 124252 00:17:39.332 [2024-11-29 12:01:44.842337] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.589 12:01:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:39.589 00:17:39.589 real 0m9.757s 00:17:39.589 user 0m17.796s 00:17:39.589 sys 0m1.273s 00:17:39.589 12:01:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:39.589 ************************************ 00:17:39.589 END TEST raid_state_function_test 00:17:39.589 ************************************ 00:17:39.589 12:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:17:39.847 12:01:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:39.847 12:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.847 12:01:45 -- common/autotest_common.sh@10 -- # 
set +x 00:17:39.847 ************************************ 00:17:39.847 START TEST raid_state_function_test_sb 00:17:39.847 ************************************ 00:17:39.847 12:01:45 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:39.847 12:01:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=124573 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124573' 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:39.848 Process raid pid: 124573 00:17:39.848 12:01:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124573 /var/tmp/spdk-raid.sock 00:17:39.848 12:01:45 -- common/autotest_common.sh@829 -- # '[' -z 124573 ']' 00:17:39.848 12:01:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:39.848 12:01:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:39.848 12:01:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:39.848 12:01:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.848 12:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.848 [2024-11-29 12:01:45.194983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
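raid_state_function_test, which finishes above, and raid_state_function_test_sb, which is starting here, drive the same RPCs but assert on the raid state transitions rather than on superblock contents. A rough outline of the transitions checked (a sketch only, assuming the same rpc.py socket; Existed_Raid and BaseBdev1/2 are the names the test uses, and the final jq expression is just one way to pull out the state field):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # declare the array before its members exist -> state stays "configuring"
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # add the base bdevs; once both are claimed the state becomes "online"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2

  # concat has no redundancy, so removing a member drops the array to "offline"
  $RPC bdev_malloc_delete BaseBdev1

  # each verify step is a jq filter over the raid dump, e.g.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

The superblock variant (_sb) repeats the same walk with -s added to bdev_raid_create, so the base bdevs carry array metadata and show a non-zero data_offset (2048 instead of 0) in the base_bdevs_list dumps that follow.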
00:17:39.848 [2024-11-29 12:01:45.195166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.848 [2024-11-29 12:01:45.335371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.106 [2024-11-29 12:01:45.428944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.106 [2024-11-29 12:01:45.483232] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.672 12:01:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.672 12:01:46 -- common/autotest_common.sh@862 -- # return 0 00:17:40.672 12:01:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:40.981 [2024-11-29 12:01:46.363366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.981 [2024-11-29 12:01:46.363480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.981 [2024-11-29 12:01:46.363496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.981 [2024-11-29 12:01:46.363517] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.981 12:01:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.243 12:01:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.243 "name": "Existed_Raid", 00:17:41.244 "uuid": "30a79115-ef3c-4ad0-9222-b356bfd5eff5", 00:17:41.244 "strip_size_kb": 64, 00:17:41.244 "state": "configuring", 00:17:41.244 "raid_level": "concat", 00:17:41.244 "superblock": true, 00:17:41.244 "num_base_bdevs": 2, 00:17:41.244 "num_base_bdevs_discovered": 0, 00:17:41.244 "num_base_bdevs_operational": 2, 00:17:41.244 "base_bdevs_list": [ 00:17:41.244 { 00:17:41.244 "name": "BaseBdev1", 00:17:41.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.244 "is_configured": false, 00:17:41.244 "data_offset": 0, 00:17:41.244 "data_size": 0 00:17:41.244 }, 00:17:41.244 { 00:17:41.244 "name": "BaseBdev2", 00:17:41.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.244 "is_configured": false, 00:17:41.244 "data_offset": 0, 00:17:41.244 "data_size": 0 00:17:41.244 } 00:17:41.244 ] 00:17:41.244 }' 00:17:41.244 12:01:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.244 12:01:46 -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.186 12:01:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.186 [2024-11-29 12:01:47.599495] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.186 [2024-11-29 12:01:47.599600] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:42.186 12:01:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:42.444 [2024-11-29 12:01:47.827584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.444 [2024-11-29 12:01:47.827689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.444 [2024-11-29 12:01:47.827703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.444 [2024-11-29 12:01:47.827727] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.444 12:01:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.702 [2024-11-29 12:01:48.080081] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.702 BaseBdev1 00:17:42.702 12:01:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:42.702 12:01:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:42.702 12:01:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.702 12:01:48 -- common/autotest_common.sh@899 -- # local i 00:17:42.702 12:01:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.702 12:01:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.702 12:01:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.960 12:01:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.218 [ 00:17:43.218 { 00:17:43.218 "name": "BaseBdev1", 00:17:43.218 "aliases": [ 00:17:43.218 "df1cbe7a-9784-4a46-9c97-68222115509b" 00:17:43.218 ], 00:17:43.218 "product_name": "Malloc disk", 00:17:43.218 "block_size": 512, 00:17:43.218 "num_blocks": 65536, 00:17:43.218 "uuid": "df1cbe7a-9784-4a46-9c97-68222115509b", 00:17:43.218 "assigned_rate_limits": { 00:17:43.218 "rw_ios_per_sec": 0, 00:17:43.218 "rw_mbytes_per_sec": 0, 00:17:43.218 "r_mbytes_per_sec": 0, 00:17:43.218 "w_mbytes_per_sec": 0 00:17:43.218 }, 00:17:43.218 "claimed": true, 00:17:43.218 "claim_type": "exclusive_write", 00:17:43.218 "zoned": false, 00:17:43.218 "supported_io_types": { 00:17:43.218 "read": true, 00:17:43.218 "write": true, 00:17:43.218 "unmap": true, 00:17:43.218 "write_zeroes": true, 00:17:43.218 "flush": true, 00:17:43.218 "reset": true, 00:17:43.218 "compare": false, 00:17:43.218 "compare_and_write": false, 00:17:43.218 "abort": true, 00:17:43.218 "nvme_admin": false, 00:17:43.218 "nvme_io": false 00:17:43.218 }, 00:17:43.218 "memory_domains": [ 00:17:43.218 { 00:17:43.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.218 "dma_device_type": 2 00:17:43.218 } 00:17:43.218 ], 00:17:43.218 "driver_specific": {} 00:17:43.218 } 00:17:43.218 ] 00:17:43.218 
12:01:48 -- common/autotest_common.sh@905 -- # return 0 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.218 12:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.476 12:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.476 "name": "Existed_Raid", 00:17:43.476 "uuid": "c6e2b292-b37b-49c3-a57d-f682f91ed316", 00:17:43.476 "strip_size_kb": 64, 00:17:43.476 "state": "configuring", 00:17:43.476 "raid_level": "concat", 00:17:43.476 "superblock": true, 00:17:43.476 "num_base_bdevs": 2, 00:17:43.476 "num_base_bdevs_discovered": 1, 00:17:43.476 "num_base_bdevs_operational": 2, 00:17:43.476 "base_bdevs_list": [ 00:17:43.476 { 00:17:43.477 "name": "BaseBdev1", 00:17:43.477 "uuid": "df1cbe7a-9784-4a46-9c97-68222115509b", 00:17:43.477 "is_configured": true, 00:17:43.477 "data_offset": 2048, 00:17:43.477 "data_size": 63488 00:17:43.477 }, 00:17:43.477 { 00:17:43.477 "name": "BaseBdev2", 00:17:43.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.477 "is_configured": false, 00:17:43.477 "data_offset": 0, 00:17:43.477 "data_size": 0 00:17:43.477 } 00:17:43.477 ] 00:17:43.477 }' 00:17:43.477 12:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.477 12:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:44.044 12:01:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.303 [2024-11-29 12:01:49.652593] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.303 [2024-11-29 12:01:49.652670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:44.303 12:01:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:44.303 12:01:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:44.562 12:01:49 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:44.821 BaseBdev1 00:17:44.821 12:01:50 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:44.821 12:01:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:44.821 12:01:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:44.821 12:01:50 -- common/autotest_common.sh@899 -- # local i 00:17:44.821 12:01:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:44.821 12:01:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:44.821 12:01:50 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.080 12:01:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.339 [ 00:17:45.339 { 00:17:45.339 "name": "BaseBdev1", 00:17:45.339 "aliases": [ 00:17:45.339 "c5f6ea9e-aa59-4852-b256-b5ced0fd3920" 00:17:45.339 ], 00:17:45.339 "product_name": "Malloc disk", 00:17:45.339 "block_size": 512, 00:17:45.339 "num_blocks": 65536, 00:17:45.339 "uuid": "c5f6ea9e-aa59-4852-b256-b5ced0fd3920", 00:17:45.339 "assigned_rate_limits": { 00:17:45.339 "rw_ios_per_sec": 0, 00:17:45.339 "rw_mbytes_per_sec": 0, 00:17:45.339 "r_mbytes_per_sec": 0, 00:17:45.339 "w_mbytes_per_sec": 0 00:17:45.339 }, 00:17:45.339 "claimed": false, 00:17:45.339 "zoned": false, 00:17:45.339 "supported_io_types": { 00:17:45.339 "read": true, 00:17:45.339 "write": true, 00:17:45.339 "unmap": true, 00:17:45.339 "write_zeroes": true, 00:17:45.339 "flush": true, 00:17:45.339 "reset": true, 00:17:45.339 "compare": false, 00:17:45.339 "compare_and_write": false, 00:17:45.339 "abort": true, 00:17:45.339 "nvme_admin": false, 00:17:45.339 "nvme_io": false 00:17:45.339 }, 00:17:45.339 "memory_domains": [ 00:17:45.339 { 00:17:45.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.339 "dma_device_type": 2 00:17:45.339 } 00:17:45.339 ], 00:17:45.339 "driver_specific": {} 00:17:45.339 } 00:17:45.339 ] 00:17:45.339 12:01:50 -- common/autotest_common.sh@905 -- # return 0 00:17:45.339 12:01:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:45.598 [2024-11-29 12:01:50.988526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.598 [2024-11-29 12:01:50.990704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.598 [2024-11-29 12:01:50.990787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.598 12:01:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.857 12:01:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.857 "name": "Existed_Raid", 00:17:45.857 "uuid": "d6945416-4d1e-43ad-a459-d4e961cefebd", 00:17:45.857 "strip_size_kb": 64, 00:17:45.857 "state": 
"configuring", 00:17:45.857 "raid_level": "concat", 00:17:45.857 "superblock": true, 00:17:45.857 "num_base_bdevs": 2, 00:17:45.857 "num_base_bdevs_discovered": 1, 00:17:45.857 "num_base_bdevs_operational": 2, 00:17:45.857 "base_bdevs_list": [ 00:17:45.857 { 00:17:45.857 "name": "BaseBdev1", 00:17:45.857 "uuid": "c5f6ea9e-aa59-4852-b256-b5ced0fd3920", 00:17:45.857 "is_configured": true, 00:17:45.857 "data_offset": 2048, 00:17:45.857 "data_size": 63488 00:17:45.857 }, 00:17:45.857 { 00:17:45.857 "name": "BaseBdev2", 00:17:45.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.857 "is_configured": false, 00:17:45.857 "data_offset": 0, 00:17:45.857 "data_size": 0 00:17:45.857 } 00:17:45.857 ] 00:17:45.857 }' 00:17:45.857 12:01:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.857 12:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:46.424 12:01:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:46.685 [2024-11-29 12:01:52.142239] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.685 [2024-11-29 12:01:52.142517] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:17:46.685 [2024-11-29 12:01:52.142535] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:46.685 [2024-11-29 12:01:52.142720] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:17:46.685 [2024-11-29 12:01:52.143108] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:17:46.685 [2024-11-29 12:01:52.143139] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:17:46.685 [2024-11-29 12:01:52.143315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.685 BaseBdev2 00:17:46.685 12:01:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:46.685 12:01:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:46.685 12:01:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:46.685 12:01:52 -- common/autotest_common.sh@899 -- # local i 00:17:46.685 12:01:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:46.685 12:01:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:46.685 12:01:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.955 12:01:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:47.214 [ 00:17:47.214 { 00:17:47.214 "name": "BaseBdev2", 00:17:47.214 "aliases": [ 00:17:47.214 "ed091480-de05-426c-9530-143e1fb1171b" 00:17:47.214 ], 00:17:47.214 "product_name": "Malloc disk", 00:17:47.214 "block_size": 512, 00:17:47.214 "num_blocks": 65536, 00:17:47.214 "uuid": "ed091480-de05-426c-9530-143e1fb1171b", 00:17:47.214 "assigned_rate_limits": { 00:17:47.214 "rw_ios_per_sec": 0, 00:17:47.214 "rw_mbytes_per_sec": 0, 00:17:47.214 "r_mbytes_per_sec": 0, 00:17:47.214 "w_mbytes_per_sec": 0 00:17:47.214 }, 00:17:47.214 "claimed": true, 00:17:47.214 "claim_type": "exclusive_write", 00:17:47.214 "zoned": false, 00:17:47.214 "supported_io_types": { 00:17:47.214 "read": true, 00:17:47.214 "write": true, 00:17:47.214 "unmap": true, 00:17:47.214 "write_zeroes": true, 00:17:47.214 "flush": true, 00:17:47.214 
"reset": true, 00:17:47.214 "compare": false, 00:17:47.214 "compare_and_write": false, 00:17:47.214 "abort": true, 00:17:47.214 "nvme_admin": false, 00:17:47.214 "nvme_io": false 00:17:47.214 }, 00:17:47.214 "memory_domains": [ 00:17:47.214 { 00:17:47.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.214 "dma_device_type": 2 00:17:47.214 } 00:17:47.214 ], 00:17:47.214 "driver_specific": {} 00:17:47.214 } 00:17:47.214 ] 00:17:47.214 12:01:52 -- common/autotest_common.sh@905 -- # return 0 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.214 12:01:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.472 12:01:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.472 "name": "Existed_Raid", 00:17:47.472 "uuid": "d6945416-4d1e-43ad-a459-d4e961cefebd", 00:17:47.472 "strip_size_kb": 64, 00:17:47.472 "state": "online", 00:17:47.472 "raid_level": "concat", 00:17:47.472 "superblock": true, 00:17:47.472 "num_base_bdevs": 2, 00:17:47.472 "num_base_bdevs_discovered": 2, 00:17:47.472 "num_base_bdevs_operational": 2, 00:17:47.472 "base_bdevs_list": [ 00:17:47.472 { 00:17:47.472 "name": "BaseBdev1", 00:17:47.472 "uuid": "c5f6ea9e-aa59-4852-b256-b5ced0fd3920", 00:17:47.472 "is_configured": true, 00:17:47.472 "data_offset": 2048, 00:17:47.472 "data_size": 63488 00:17:47.472 }, 00:17:47.472 { 00:17:47.472 "name": "BaseBdev2", 00:17:47.472 "uuid": "ed091480-de05-426c-9530-143e1fb1171b", 00:17:47.472 "is_configured": true, 00:17:47.472 "data_offset": 2048, 00:17:47.472 "data_size": 63488 00:17:47.472 } 00:17:47.472 ] 00:17:47.472 }' 00:17:47.472 12:01:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.472 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:48.039 12:01:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.298 [2024-11-29 12:01:53.738792] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.298 [2024-11-29 12:01:53.738837] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.298 [2024-11-29 12:01:53.738918] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:48.298 
12:01:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.298 12:01:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.556 12:01:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.556 "name": "Existed_Raid", 00:17:48.556 "uuid": "d6945416-4d1e-43ad-a459-d4e961cefebd", 00:17:48.556 "strip_size_kb": 64, 00:17:48.556 "state": "offline", 00:17:48.556 "raid_level": "concat", 00:17:48.556 "superblock": true, 00:17:48.556 "num_base_bdevs": 2, 00:17:48.556 "num_base_bdevs_discovered": 1, 00:17:48.556 "num_base_bdevs_operational": 1, 00:17:48.556 "base_bdevs_list": [ 00:17:48.556 { 00:17:48.556 "name": null, 00:17:48.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.556 "is_configured": false, 00:17:48.556 "data_offset": 2048, 00:17:48.556 "data_size": 63488 00:17:48.556 }, 00:17:48.556 { 00:17:48.556 "name": "BaseBdev2", 00:17:48.556 "uuid": "ed091480-de05-426c-9530-143e1fb1171b", 00:17:48.556 "is_configured": true, 00:17:48.556 "data_offset": 2048, 00:17:48.556 "data_size": 63488 00:17:48.556 } 00:17:48.556 ] 00:17:48.556 }' 00:17:48.556 12:01:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.556 12:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.491 12:01:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:49.768 [2024-11-29 12:01:55.170806] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.768 [2024-11-29 12:01:55.170912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:49.768 12:01:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:49.768 12:01:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.768 12:01:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.768 12:01:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.028 12:01:55 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:17:50.028 12:01:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:50.028 12:01:55 -- bdev/bdev_raid.sh@287 -- # killprocess 124573 00:17:50.028 12:01:55 -- common/autotest_common.sh@936 -- # '[' -z 124573 ']' 00:17:50.028 12:01:55 -- common/autotest_common.sh@940 -- # kill -0 124573 00:17:50.028 12:01:55 -- common/autotest_common.sh@941 -- # uname 00:17:50.028 12:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.028 12:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124573 00:17:50.028 12:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.028 12:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.028 killing process with pid 124573 00:17:50.028 12:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124573' 00:17:50.028 12:01:55 -- common/autotest_common.sh@955 -- # kill 124573 00:17:50.028 [2024-11-29 12:01:55.457613] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.028 [2024-11-29 12:01:55.457739] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.028 12:01:55 -- common/autotest_common.sh@960 -- # wait 124573 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:50.286 00:17:50.286 real 0m10.559s 00:17:50.286 user 0m19.081s 00:17:50.286 sys 0m1.511s 00:17:50.286 12:01:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.286 12:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:50.286 ************************************ 00:17:50.286 END TEST raid_state_function_test_sb 00:17:50.286 ************************************ 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:50.286 12:01:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:50.286 12:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.286 12:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:50.286 ************************************ 00:17:50.286 START TEST raid_superblock_test 00:17:50.286 ************************************ 00:17:50.286 12:01:55 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@357 -- # raid_pid=124897 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124897 
/var/tmp/spdk-raid.sock 00:17:50.286 12:01:55 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:50.286 12:01:55 -- common/autotest_common.sh@829 -- # '[' -z 124897 ']' 00:17:50.286 12:01:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:50.286 12:01:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.286 12:01:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:50.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:50.286 12:01:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.286 12:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:50.545 [2024-11-29 12:01:55.823364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:50.545 [2024-11-29 12:01:55.823616] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124897 ] 00:17:50.545 [2024-11-29 12:01:55.973799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.803 [2024-11-29 12:01:56.070122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.804 [2024-11-29 12:01:56.126877] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.371 12:01:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.371 12:01:56 -- common/autotest_common.sh@862 -- # return 0 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.371 12:01:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:51.629 malloc1 00:17:51.629 12:01:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.888 [2024-11-29 12:01:57.331092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.888 [2024-11-29 12:01:57.331244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.888 [2024-11-29 12:01:57.331294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:51.888 [2024-11-29 12:01:57.331364] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.888 [2024-11-29 12:01:57.334328] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.888 [2024-11-29 12:01:57.334434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.888 pt1 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
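Per the calls just traced, each base device consumed by this superblock test is a 32 MB malloc bdev with 512-byte blocks wrapped in a passthru bdev carrying a fixed UUID; one such pair can be recreated by hand with the same two RPCs (sketch only, socket path and names exactly as they appear above):

    # sketch: malloc + passthru pair, mirroring the pt1 setup traced above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
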
00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.888 12:01:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:52.145 malloc2 00:17:52.145 12:01:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.403 [2024-11-29 12:01:57.858399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.403 [2024-11-29 12:01:57.858524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.403 [2024-11-29 12:01:57.858571] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:52.403 [2024-11-29 12:01:57.858619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.403 [2024-11-29 12:01:57.861204] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.403 [2024-11-29 12:01:57.861264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.403 pt2 00:17:52.403 12:01:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:52.403 12:01:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.403 12:01:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:52.661 [2024-11-29 12:01:58.102515] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.661 [2024-11-29 12:01:58.104920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.661 [2024-11-29 12:01:58.105186] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:17:52.661 [2024-11-29 12:01:58.105211] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:52.661 [2024-11-29 12:01:58.105380] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:52.661 [2024-11-29 12:01:58.105853] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:17:52.661 [2024-11-29 12:01:58.105877] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:17:52.661 [2024-11-29 12:01:58.106045] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.661 12:01:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.920 12:01:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.920 "name": "raid_bdev1", 00:17:52.920 "uuid": "334a04e7-cd1a-4906-baab-f65e22de5f12", 00:17:52.920 "strip_size_kb": 64, 00:17:52.920 "state": "online", 00:17:52.920 "raid_level": "concat", 00:17:52.920 "superblock": true, 00:17:52.920 "num_base_bdevs": 2, 00:17:52.920 "num_base_bdevs_discovered": 2, 00:17:52.920 "num_base_bdevs_operational": 2, 00:17:52.920 "base_bdevs_list": [ 00:17:52.920 { 00:17:52.920 "name": "pt1", 00:17:52.920 "uuid": "3f087b01-7d03-5257-9b98-478ef2bcaca8", 00:17:52.920 "is_configured": true, 00:17:52.920 "data_offset": 2048, 00:17:52.920 "data_size": 63488 00:17:52.920 }, 00:17:52.920 { 00:17:52.920 "name": "pt2", 00:17:52.920 "uuid": "319c6d17-c0dc-54bf-bfb3-a34f8f840f1b", 00:17:52.920 "is_configured": true, 00:17:52.920 "data_offset": 2048, 00:17:52.920 "data_size": 63488 00:17:52.920 } 00:17:52.920 ] 00:17:52.920 }' 00:17:52.920 12:01:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.920 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:53.510 12:01:59 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:53.510 12:01:59 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:53.769 [2024-11-29 12:01:59.267052] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.028 12:01:59 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=334a04e7-cd1a-4906-baab-f65e22de5f12 00:17:54.028 12:01:59 -- bdev/bdev_raid.sh@380 -- # '[' -z 334a04e7-cd1a-4906-baab-f65e22de5f12 ']' 00:17:54.028 12:01:59 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:54.028 [2024-11-29 12:01:59.538914] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.028 [2024-11-29 12:01:59.538965] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.028 [2024-11-29 12:01:59.539099] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.028 [2024-11-29 12:01:59.539176] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.028 [2024-11-29 12:01:59.539191] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:17:54.287 12:01:59 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:54.287 12:01:59 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.545 12:01:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:54.545 12:01:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:54.545 12:01:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.545 12:01:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:17:54.804 12:02:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.804 12:02:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:55.062 12:02:00 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:55.062 12:02:00 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:55.321 12:02:00 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:55.321 12:02:00 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:55.321 12:02:00 -- common/autotest_common.sh@650 -- # local es=0 00:17:55.321 12:02:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:55.321 12:02:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.321 12:02:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.321 12:02:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.321 12:02:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.321 12:02:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.321 12:02:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.321 12:02:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.321 12:02:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:55.321 12:02:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:55.579 [2024-11-29 12:02:00.835136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:55.579 [2024-11-29 12:02:00.837459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:55.579 [2024-11-29 12:02:00.837565] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:55.579 [2024-11-29 12:02:00.837673] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:55.579 [2024-11-29 12:02:00.837718] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.579 [2024-11-29 12:02:00.837730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:17:55.579 request: 00:17:55.579 { 00:17:55.579 "name": "raid_bdev1", 00:17:55.579 "raid_level": "concat", 00:17:55.579 "base_bdevs": [ 00:17:55.579 "malloc1", 00:17:55.579 "malloc2" 00:17:55.579 ], 00:17:55.580 "superblock": false, 00:17:55.580 "strip_size_kb": 64, 00:17:55.580 "method": "bdev_raid_create", 00:17:55.580 "req_id": 1 00:17:55.580 } 00:17:55.580 Got JSON-RPC error response 00:17:55.580 response: 00:17:55.580 { 00:17:55.580 "code": -17, 00:17:55.580 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:55.580 } 00:17:55.580 12:02:00 -- common/autotest_common.sh@653 -- # es=1 00:17:55.580 12:02:00 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.580 12:02:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.580 12:02:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.580 12:02:00 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.580 12:02:00 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:55.580 12:02:01 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:55.580 12:02:01 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:55.580 12:02:01 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:55.838 [2024-11-29 12:02:01.303180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.838 [2024-11-29 12:02:01.303339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.838 [2024-11-29 12:02:01.303407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:55.838 [2024-11-29 12:02:01.303442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.838 [2024-11-29 12:02:01.306127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.838 [2024-11-29 12:02:01.306216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.838 [2024-11-29 12:02:01.306385] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:55.838 [2024-11-29 12:02:01.306461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.838 pt1 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.838 12:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.096 12:02:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.096 "name": "raid_bdev1", 00:17:56.096 "uuid": "334a04e7-cd1a-4906-baab-f65e22de5f12", 00:17:56.096 "strip_size_kb": 64, 00:17:56.096 "state": "configuring", 00:17:56.096 "raid_level": "concat", 00:17:56.096 "superblock": true, 00:17:56.096 "num_base_bdevs": 2, 00:17:56.096 "num_base_bdevs_discovered": 1, 00:17:56.096 "num_base_bdevs_operational": 2, 00:17:56.096 "base_bdevs_list": [ 00:17:56.096 { 00:17:56.096 "name": "pt1", 00:17:56.096 "uuid": "3f087b01-7d03-5257-9b98-478ef2bcaca8", 00:17:56.096 "is_configured": true, 00:17:56.096 "data_offset": 2048, 00:17:56.096 "data_size": 63488 00:17:56.096 }, 00:17:56.096 { 00:17:56.096 "name": null, 00:17:56.096 "uuid": 
"319c6d17-c0dc-54bf-bfb3-a34f8f840f1b", 00:17:56.096 "is_configured": false, 00:17:56.096 "data_offset": 2048, 00:17:56.096 "data_size": 63488 00:17:56.096 } 00:17:56.096 ] 00:17:56.096 }' 00:17:56.096 12:02:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.096 12:02:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.030 [2024-11-29 12:02:02.503916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.030 [2024-11-29 12:02:02.504064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.030 [2024-11-29 12:02:02.504109] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:57.030 [2024-11-29 12:02:02.504140] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.030 [2024-11-29 12:02:02.504665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.030 [2024-11-29 12:02:02.504717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.030 [2024-11-29 12:02:02.504815] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:57.030 [2024-11-29 12:02:02.504860] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.030 [2024-11-29 12:02:02.504990] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:17:57.030 [2024-11-29 12:02:02.505006] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:57.030 [2024-11-29 12:02:02.505095] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:57.030 [2024-11-29 12:02:02.505427] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:17:57.030 [2024-11-29 12:02:02.505451] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:17:57.030 [2024-11-29 12:02:02.505563] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.030 pt2 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:57.030 12:02:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.031 12:02:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.289 12:02:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.289 "name": "raid_bdev1", 00:17:57.289 "uuid": "334a04e7-cd1a-4906-baab-f65e22de5f12", 00:17:57.289 "strip_size_kb": 64, 00:17:57.289 "state": "online", 00:17:57.289 "raid_level": "concat", 00:17:57.289 "superblock": true, 00:17:57.289 "num_base_bdevs": 2, 00:17:57.289 "num_base_bdevs_discovered": 2, 00:17:57.289 "num_base_bdevs_operational": 2, 00:17:57.289 "base_bdevs_list": [ 00:17:57.289 { 00:17:57.289 "name": "pt1", 00:17:57.289 "uuid": "3f087b01-7d03-5257-9b98-478ef2bcaca8", 00:17:57.289 "is_configured": true, 00:17:57.289 "data_offset": 2048, 00:17:57.289 "data_size": 63488 00:17:57.289 }, 00:17:57.289 { 00:17:57.289 "name": "pt2", 00:17:57.289 "uuid": "319c6d17-c0dc-54bf-bfb3-a34f8f840f1b", 00:17:57.289 "is_configured": true, 00:17:57.289 "data_offset": 2048, 00:17:57.289 "data_size": 63488 00:17:57.289 } 00:17:57.289 ] 00:17:57.289 }' 00:17:57.289 12:02:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.289 12:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:58.224 [2024-11-29 12:02:03.652378] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@430 -- # '[' 334a04e7-cd1a-4906-baab-f65e22de5f12 '!=' 334a04e7-cd1a-4906-baab-f65e22de5f12 ']' 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:58.224 12:02:03 -- bdev/bdev_raid.sh@511 -- # killprocess 124897 00:17:58.224 12:02:03 -- common/autotest_common.sh@936 -- # '[' -z 124897 ']' 00:17:58.224 12:02:03 -- common/autotest_common.sh@940 -- # kill -0 124897 00:17:58.224 12:02:03 -- common/autotest_common.sh@941 -- # uname 00:17:58.224 12:02:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.224 12:02:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124897 00:17:58.224 12:02:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:58.224 12:02:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:58.224 12:02:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124897' 00:17:58.224 killing process with pid 124897 00:17:58.224 12:02:03 -- common/autotest_common.sh@955 -- # kill 124897 00:17:58.224 [2024-11-29 12:02:03.698963] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.224 [2024-11-29 12:02:03.699071] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.224 [2024-11-29 12:02:03.699131] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.224 [2024-11-29 12:02:03.699149] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:17:58.224 12:02:03 -- common/autotest_common.sh@960 -- # wait 124897 00:17:58.225 [2024-11-29 12:02:03.725595] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.483 12:02:03 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:58.483 00:17:58.483 real 0m8.212s 
00:17:58.483 user 0m14.765s 00:17:58.483 sys 0m1.130s 00:17:58.483 12:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:58.483 12:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:58.483 ************************************ 00:17:58.483 END TEST raid_superblock_test 00:17:58.483 ************************************ 00:17:58.742 12:02:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:58.742 12:02:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:17:58.742 12:02:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:58.742 12:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.742 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:17:58.742 ************************************ 00:17:58.742 START TEST raid_state_function_test 00:17:58.742 ************************************ 00:17:58.742 12:02:04 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=125144 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125144' 00:17:58.743 Process raid pid: 125144 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125144 /var/tmp/spdk-raid.sock 00:17:58.743 12:02:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:58.743 12:02:04 -- common/autotest_common.sh@829 -- # '[' -z 125144 ']' 00:17:58.743 12:02:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:58.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:58.743 12:02:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.743 12:02:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:58.743 12:02:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.743 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:17:58.743 [2024-11-29 12:02:04.095681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:58.743 [2024-11-29 12:02:04.095923] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.743 [2024-11-29 12:02:04.239948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.001 [2024-11-29 12:02:04.326400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.001 [2024-11-29 12:02:04.379383] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.635 12:02:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.635 12:02:05 -- common/autotest_common.sh@862 -- # return 0 00:17:59.635 12:02:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:59.893 [2024-11-29 12:02:05.255794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.893 [2024-11-29 12:02:05.255915] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.893 [2024-11-29 12:02:05.255932] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.893 [2024-11-29 12:02:05.255953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.893 12:02:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.151 12:02:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.151 "name": "Existed_Raid", 00:18:00.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.151 "strip_size_kb": 0, 00:18:00.151 "state": "configuring", 00:18:00.151 "raid_level": "raid1", 00:18:00.151 "superblock": false, 00:18:00.151 "num_base_bdevs": 2, 00:18:00.151 "num_base_bdevs_discovered": 0, 00:18:00.151 "num_base_bdevs_operational": 2, 00:18:00.151 "base_bdevs_list": [ 00:18:00.151 { 00:18:00.151 "name": "BaseBdev1", 00:18:00.151 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:00.151 "is_configured": false, 00:18:00.151 "data_offset": 0, 00:18:00.151 "data_size": 0 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev2", 00:18:00.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.151 "is_configured": false, 00:18:00.151 "data_offset": 0, 00:18:00.151 "data_size": 0 00:18:00.151 } 00:18:00.151 ] 00:18:00.151 }' 00:18:00.151 12:02:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.151 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:18:01.087 12:02:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:01.087 [2024-11-29 12:02:06.499913] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.087 [2024-11-29 12:02:06.499998] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:01.087 12:02:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:01.346 [2024-11-29 12:02:06.767999] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.346 [2024-11-29 12:02:06.768108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.346 [2024-11-29 12:02:06.768123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.346 [2024-11-29 12:02:06.768150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.346 12:02:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:01.605 [2024-11-29 12:02:07.023938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.605 BaseBdev1 00:18:01.605 12:02:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:01.605 12:02:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:01.605 12:02:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:01.605 12:02:07 -- common/autotest_common.sh@899 -- # local i 00:18:01.605 12:02:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:01.605 12:02:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:01.605 12:02:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:01.863 12:02:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:02.121 [ 00:18:02.121 { 00:18:02.121 "name": "BaseBdev1", 00:18:02.121 "aliases": [ 00:18:02.121 "451b420e-b265-401b-ad5b-5ec96d8c4957" 00:18:02.121 ], 00:18:02.121 "product_name": "Malloc disk", 00:18:02.121 "block_size": 512, 00:18:02.121 "num_blocks": 65536, 00:18:02.121 "uuid": "451b420e-b265-401b-ad5b-5ec96d8c4957", 00:18:02.121 "assigned_rate_limits": { 00:18:02.121 "rw_ios_per_sec": 0, 00:18:02.121 "rw_mbytes_per_sec": 0, 00:18:02.121 "r_mbytes_per_sec": 0, 00:18:02.121 "w_mbytes_per_sec": 0 00:18:02.121 }, 00:18:02.121 "claimed": true, 00:18:02.121 "claim_type": "exclusive_write", 00:18:02.121 "zoned": false, 00:18:02.122 "supported_io_types": { 00:18:02.122 "read": true, 00:18:02.122 "write": true, 00:18:02.122 "unmap": true, 00:18:02.122 "write_zeroes": true, 
00:18:02.122 "flush": true, 00:18:02.122 "reset": true, 00:18:02.122 "compare": false, 00:18:02.122 "compare_and_write": false, 00:18:02.122 "abort": true, 00:18:02.122 "nvme_admin": false, 00:18:02.122 "nvme_io": false 00:18:02.122 }, 00:18:02.122 "memory_domains": [ 00:18:02.122 { 00:18:02.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.122 "dma_device_type": 2 00:18:02.122 } 00:18:02.122 ], 00:18:02.122 "driver_specific": {} 00:18:02.122 } 00:18:02.122 ] 00:18:02.122 12:02:07 -- common/autotest_common.sh@905 -- # return 0 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.122 12:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.381 12:02:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.381 "name": "Existed_Raid", 00:18:02.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.381 "strip_size_kb": 0, 00:18:02.381 "state": "configuring", 00:18:02.381 "raid_level": "raid1", 00:18:02.381 "superblock": false, 00:18:02.381 "num_base_bdevs": 2, 00:18:02.381 "num_base_bdevs_discovered": 1, 00:18:02.381 "num_base_bdevs_operational": 2, 00:18:02.381 "base_bdevs_list": [ 00:18:02.381 { 00:18:02.381 "name": "BaseBdev1", 00:18:02.381 "uuid": "451b420e-b265-401b-ad5b-5ec96d8c4957", 00:18:02.381 "is_configured": true, 00:18:02.381 "data_offset": 0, 00:18:02.381 "data_size": 65536 00:18:02.381 }, 00:18:02.381 { 00:18:02.381 "name": "BaseBdev2", 00:18:02.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.381 "is_configured": false, 00:18:02.381 "data_offset": 0, 00:18:02.381 "data_size": 0 00:18:02.381 } 00:18:02.381 ] 00:18:02.381 }' 00:18:02.381 12:02:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.381 12:02:07 -- common/autotest_common.sh@10 -- # set +x 00:18:02.947 12:02:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:03.205 [2024-11-29 12:02:08.640355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.205 [2024-11-29 12:02:08.640434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:03.205 12:02:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:03.205 12:02:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:03.464 [2024-11-29 12:02:08.912505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.464 [2024-11-29 12:02:08.914902] bdev.c:8019:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.464 [2024-11-29 12:02:08.914988] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.464 12:02:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.722 12:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.722 "name": "Existed_Raid", 00:18:03.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.722 "strip_size_kb": 0, 00:18:03.722 "state": "configuring", 00:18:03.722 "raid_level": "raid1", 00:18:03.722 "superblock": false, 00:18:03.722 "num_base_bdevs": 2, 00:18:03.722 "num_base_bdevs_discovered": 1, 00:18:03.722 "num_base_bdevs_operational": 2, 00:18:03.722 "base_bdevs_list": [ 00:18:03.722 { 00:18:03.722 "name": "BaseBdev1", 00:18:03.722 "uuid": "451b420e-b265-401b-ad5b-5ec96d8c4957", 00:18:03.722 "is_configured": true, 00:18:03.722 "data_offset": 0, 00:18:03.722 "data_size": 65536 00:18:03.722 }, 00:18:03.722 { 00:18:03.722 "name": "BaseBdev2", 00:18:03.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.722 "is_configured": false, 00:18:03.722 "data_offset": 0, 00:18:03.722 "data_size": 0 00:18:03.722 } 00:18:03.722 ] 00:18:03.722 }' 00:18:03.722 12:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.722 12:02:09 -- common/autotest_common.sh@10 -- # set +x 00:18:04.654 12:02:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:04.654 [2024-11-29 12:02:10.106518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.654 [2024-11-29 12:02:10.106601] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:04.654 [2024-11-29 12:02:10.106618] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:04.654 [2024-11-29 12:02:10.106823] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:18:04.654 [2024-11-29 12:02:10.107406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:04.654 [2024-11-29 12:02:10.107437] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:04.654 [2024-11-29 12:02:10.107794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.654 BaseBdev2 
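The waitforbdev helper invoked next, as its own trace shows, amounts to flushing pending examine callbacks and then polling for the named bdev with a 2000 ms timeout; issued by hand that is roughly (sketch only, same socket as above):

    # sketch: wait for examine to finish, then poll for BaseBdev2 with a 2 s timeout
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
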
00:18:04.654 12:02:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:04.654 12:02:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:04.654 12:02:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:04.654 12:02:10 -- common/autotest_common.sh@899 -- # local i 00:18:04.654 12:02:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:04.654 12:02:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:04.654 12:02:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.911 12:02:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:05.169 [ 00:18:05.169 { 00:18:05.169 "name": "BaseBdev2", 00:18:05.169 "aliases": [ 00:18:05.169 "a57e4dbf-cbc6-4094-b147-8979715e8da5" 00:18:05.169 ], 00:18:05.169 "product_name": "Malloc disk", 00:18:05.169 "block_size": 512, 00:18:05.169 "num_blocks": 65536, 00:18:05.169 "uuid": "a57e4dbf-cbc6-4094-b147-8979715e8da5", 00:18:05.169 "assigned_rate_limits": { 00:18:05.169 "rw_ios_per_sec": 0, 00:18:05.169 "rw_mbytes_per_sec": 0, 00:18:05.169 "r_mbytes_per_sec": 0, 00:18:05.169 "w_mbytes_per_sec": 0 00:18:05.169 }, 00:18:05.169 "claimed": true, 00:18:05.169 "claim_type": "exclusive_write", 00:18:05.169 "zoned": false, 00:18:05.169 "supported_io_types": { 00:18:05.169 "read": true, 00:18:05.169 "write": true, 00:18:05.169 "unmap": true, 00:18:05.169 "write_zeroes": true, 00:18:05.169 "flush": true, 00:18:05.169 "reset": true, 00:18:05.169 "compare": false, 00:18:05.170 "compare_and_write": false, 00:18:05.170 "abort": true, 00:18:05.170 "nvme_admin": false, 00:18:05.170 "nvme_io": false 00:18:05.170 }, 00:18:05.170 "memory_domains": [ 00:18:05.170 { 00:18:05.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.170 "dma_device_type": 2 00:18:05.170 } 00:18:05.170 ], 00:18:05.170 "driver_specific": {} 00:18:05.170 } 00:18:05.170 ] 00:18:05.170 12:02:10 -- common/autotest_common.sh@905 -- # return 0 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.170 12:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.516 12:02:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.516 "name": "Existed_Raid", 00:18:05.516 "uuid": "b2bc48d1-0644-4bf5-98ee-ab41fe3214d6", 00:18:05.516 "strip_size_kb": 0, 00:18:05.516 "state": "online", 00:18:05.516 "raid_level": "raid1", 
00:18:05.516 "superblock": false, 00:18:05.516 "num_base_bdevs": 2, 00:18:05.516 "num_base_bdevs_discovered": 2, 00:18:05.516 "num_base_bdevs_operational": 2, 00:18:05.516 "base_bdevs_list": [ 00:18:05.516 { 00:18:05.516 "name": "BaseBdev1", 00:18:05.516 "uuid": "451b420e-b265-401b-ad5b-5ec96d8c4957", 00:18:05.516 "is_configured": true, 00:18:05.516 "data_offset": 0, 00:18:05.516 "data_size": 65536 00:18:05.516 }, 00:18:05.516 { 00:18:05.516 "name": "BaseBdev2", 00:18:05.516 "uuid": "a57e4dbf-cbc6-4094-b147-8979715e8da5", 00:18:05.516 "is_configured": true, 00:18:05.516 "data_offset": 0, 00:18:05.516 "data_size": 65536 00:18:05.516 } 00:18:05.516 ] 00:18:05.516 }' 00:18:05.516 12:02:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.516 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:18:06.082 12:02:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:06.341 [2024-11-29 12:02:11.847187] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.599 12:02:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.857 12:02:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.857 "name": "Existed_Raid", 00:18:06.857 "uuid": "b2bc48d1-0644-4bf5-98ee-ab41fe3214d6", 00:18:06.857 "strip_size_kb": 0, 00:18:06.857 "state": "online", 00:18:06.857 "raid_level": "raid1", 00:18:06.857 "superblock": false, 00:18:06.857 "num_base_bdevs": 2, 00:18:06.857 "num_base_bdevs_discovered": 1, 00:18:06.857 "num_base_bdevs_operational": 1, 00:18:06.857 "base_bdevs_list": [ 00:18:06.857 { 00:18:06.857 "name": null, 00:18:06.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.857 "is_configured": false, 00:18:06.857 "data_offset": 0, 00:18:06.857 "data_size": 65536 00:18:06.857 }, 00:18:06.857 { 00:18:06.857 "name": "BaseBdev2", 00:18:06.857 "uuid": "a57e4dbf-cbc6-4094-b147-8979715e8da5", 00:18:06.857 "is_configured": true, 00:18:06.857 "data_offset": 0, 00:18:06.857 "data_size": 65536 00:18:06.857 } 00:18:06.857 ] 00:18:06.857 }' 00:18:06.857 12:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.857 12:02:12 -- common/autotest_common.sh@10 -- # set +x 00:18:07.425 12:02:12 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:07.425 12:02:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:07.425 12:02:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.425 12:02:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:07.682 12:02:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:07.682 12:02:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.682 12:02:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:07.939 [2024-11-29 12:02:13.304672] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.939 [2024-11-29 12:02:13.304717] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.939 [2024-11-29 12:02:13.304803] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.939 [2024-11-29 12:02:13.316821] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.939 [2024-11-29 12:02:13.316867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:07.939 12:02:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:07.939 12:02:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:07.939 12:02:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.939 12:02:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:08.195 12:02:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:08.196 12:02:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:08.196 12:02:13 -- bdev/bdev_raid.sh@287 -- # killprocess 125144 00:18:08.196 12:02:13 -- common/autotest_common.sh@936 -- # '[' -z 125144 ']' 00:18:08.196 12:02:13 -- common/autotest_common.sh@940 -- # kill -0 125144 00:18:08.196 12:02:13 -- common/autotest_common.sh@941 -- # uname 00:18:08.196 12:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.196 12:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125144 00:18:08.196 12:02:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.196 12:02:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.196 12:02:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125144' 00:18:08.196 killing process with pid 125144 00:18:08.196 12:02:13 -- common/autotest_common.sh@955 -- # kill 125144 00:18:08.196 [2024-11-29 12:02:13.621841] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.196 [2024-11-29 12:02:13.621945] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.196 12:02:13 -- common/autotest_common.sh@960 -- # wait 125144 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:08.454 00:18:08.454 real 0m9.828s 00:18:08.454 user 0m17.955s 00:18:08.454 sys 0m1.262s 00:18:08.454 12:02:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:08.454 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.454 ************************************ 00:18:08.454 END TEST raid_state_function_test 00:18:08.454 ************************************ 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:18:08.454 12:02:13 -- 
common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:08.454 12:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.454 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.454 ************************************ 00:18:08.454 START TEST raid_state_function_test_sb 00:18:08.454 ************************************ 00:18:08.454 12:02:13 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=125463 00:18:08.454 Process raid pid: 125463 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125463' 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125463 /var/tmp/spdk-raid.sock 00:18:08.454 12:02:13 -- common/autotest_common.sh@829 -- # '[' -z 125463 ']' 00:18:08.454 12:02:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:08.454 12:02:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:08.454 12:02:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.454 12:02:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:08.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:08.454 12:02:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.454 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.713 [2024-11-29 12:02:13.991444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:08.713 [2024-11-29 12:02:13.991720] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.713 [2024-11-29 12:02:14.141030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.971 [2024-11-29 12:02:14.228619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.971 [2024-11-29 12:02:14.282706] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.537 12:02:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.537 12:02:14 -- common/autotest_common.sh@862 -- # return 0 00:18:09.537 12:02:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:09.795 [2024-11-29 12:02:15.230712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.795 [2024-11-29 12:02:15.230820] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.795 [2024-11-29 12:02:15.230835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.795 [2024-11-29 12:02:15.230855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.795 12:02:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.053 12:02:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.053 "name": "Existed_Raid", 00:18:10.053 "uuid": "82e39295-25a9-4dd9-a50b-a1d7317dec9d", 00:18:10.053 "strip_size_kb": 0, 00:18:10.053 "state": "configuring", 00:18:10.053 "raid_level": "raid1", 00:18:10.053 "superblock": true, 00:18:10.053 "num_base_bdevs": 2, 00:18:10.053 "num_base_bdevs_discovered": 0, 00:18:10.053 "num_base_bdevs_operational": 2, 00:18:10.053 "base_bdevs_list": [ 00:18:10.053 { 00:18:10.053 "name": "BaseBdev1", 00:18:10.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.053 "is_configured": false, 00:18:10.053 "data_offset": 0, 00:18:10.053 "data_size": 0 00:18:10.053 }, 00:18:10.053 { 00:18:10.053 "name": "BaseBdev2", 00:18:10.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.053 "is_configured": false, 00:18:10.053 "data_offset": 0, 00:18:10.053 "data_size": 0 00:18:10.053 } 00:18:10.053 ] 00:18:10.053 }' 00:18:10.053 12:02:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.053 12:02:15 -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.986 12:02:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:10.986 [2024-11-29 12:02:16.410869] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:10.986 [2024-11-29 12:02:16.410936] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:10.986 12:02:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:11.245 [2024-11-29 12:02:16.698960] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.245 [2024-11-29 12:02:16.699060] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.245 [2024-11-29 12:02:16.699076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.245 [2024-11-29 12:02:16.699101] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.245 12:02:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.504 [2024-11-29 12:02:16.978912] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.504 BaseBdev1 00:18:11.504 12:02:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:11.504 12:02:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:11.504 12:02:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:11.504 12:02:16 -- common/autotest_common.sh@899 -- # local i 00:18:11.504 12:02:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:11.504 12:02:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:11.504 12:02:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.071 12:02:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.071 [ 00:18:12.071 { 00:18:12.071 "name": "BaseBdev1", 00:18:12.071 "aliases": [ 00:18:12.071 "6a4fb459-0bd7-4a46-8b65-44002f153e5e" 00:18:12.071 ], 00:18:12.071 "product_name": "Malloc disk", 00:18:12.071 "block_size": 512, 00:18:12.071 "num_blocks": 65536, 00:18:12.071 "uuid": "6a4fb459-0bd7-4a46-8b65-44002f153e5e", 00:18:12.071 "assigned_rate_limits": { 00:18:12.071 "rw_ios_per_sec": 0, 00:18:12.071 "rw_mbytes_per_sec": 0, 00:18:12.071 "r_mbytes_per_sec": 0, 00:18:12.071 "w_mbytes_per_sec": 0 00:18:12.071 }, 00:18:12.071 "claimed": true, 00:18:12.071 "claim_type": "exclusive_write", 00:18:12.071 "zoned": false, 00:18:12.071 "supported_io_types": { 00:18:12.071 "read": true, 00:18:12.071 "write": true, 00:18:12.071 "unmap": true, 00:18:12.071 "write_zeroes": true, 00:18:12.071 "flush": true, 00:18:12.071 "reset": true, 00:18:12.071 "compare": false, 00:18:12.071 "compare_and_write": false, 00:18:12.071 "abort": true, 00:18:12.071 "nvme_admin": false, 00:18:12.071 "nvme_io": false 00:18:12.071 }, 00:18:12.071 "memory_domains": [ 00:18:12.071 { 00:18:12.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.071 "dma_device_type": 2 00:18:12.071 } 00:18:12.071 ], 00:18:12.071 "driver_specific": {} 00:18:12.071 } 00:18:12.071 ] 00:18:12.071 12:02:17 -- 
common/autotest_common.sh@905 -- # return 0 00:18:12.071 12:02:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.071 12:02:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.071 12:02:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.071 12:02:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.071 12:02:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.071 12:02:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:12.072 12:02:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.072 12:02:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.072 12:02:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.072 12:02:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.072 12:02:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.072 12:02:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.331 12:02:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.331 "name": "Existed_Raid", 00:18:12.331 "uuid": "b3e64abf-4e0b-4b73-8d52-423833d712f6", 00:18:12.331 "strip_size_kb": 0, 00:18:12.331 "state": "configuring", 00:18:12.331 "raid_level": "raid1", 00:18:12.331 "superblock": true, 00:18:12.331 "num_base_bdevs": 2, 00:18:12.331 "num_base_bdevs_discovered": 1, 00:18:12.331 "num_base_bdevs_operational": 2, 00:18:12.331 "base_bdevs_list": [ 00:18:12.331 { 00:18:12.331 "name": "BaseBdev1", 00:18:12.331 "uuid": "6a4fb459-0bd7-4a46-8b65-44002f153e5e", 00:18:12.331 "is_configured": true, 00:18:12.331 "data_offset": 2048, 00:18:12.331 "data_size": 63488 00:18:12.331 }, 00:18:12.331 { 00:18:12.331 "name": "BaseBdev2", 00:18:12.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.331 "is_configured": false, 00:18:12.331 "data_offset": 0, 00:18:12.331 "data_size": 0 00:18:12.331 } 00:18:12.331 ] 00:18:12.331 }' 00:18:12.331 12:02:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.331 12:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:13.267 12:02:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:13.526 [2024-11-29 12:02:18.787389] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.526 [2024-11-29 12:02:18.787497] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:13.526 12:02:18 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:13.526 12:02:18 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:13.784 12:02:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:14.043 BaseBdev1 00:18:14.043 12:02:19 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:14.043 12:02:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:14.043 12:02:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:14.043 12:02:19 -- common/autotest_common.sh@899 -- # local i 00:18:14.043 12:02:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:14.043 12:02:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:14.043 12:02:19 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.302 12:02:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:14.561 [ 00:18:14.561 { 00:18:14.561 "name": "BaseBdev1", 00:18:14.561 "aliases": [ 00:18:14.561 "fc0e1fcd-c70f-49eb-87c8-6d06ceefcafc" 00:18:14.561 ], 00:18:14.561 "product_name": "Malloc disk", 00:18:14.561 "block_size": 512, 00:18:14.561 "num_blocks": 65536, 00:18:14.561 "uuid": "fc0e1fcd-c70f-49eb-87c8-6d06ceefcafc", 00:18:14.561 "assigned_rate_limits": { 00:18:14.561 "rw_ios_per_sec": 0, 00:18:14.561 "rw_mbytes_per_sec": 0, 00:18:14.561 "r_mbytes_per_sec": 0, 00:18:14.561 "w_mbytes_per_sec": 0 00:18:14.561 }, 00:18:14.561 "claimed": false, 00:18:14.561 "zoned": false, 00:18:14.561 "supported_io_types": { 00:18:14.561 "read": true, 00:18:14.561 "write": true, 00:18:14.561 "unmap": true, 00:18:14.561 "write_zeroes": true, 00:18:14.561 "flush": true, 00:18:14.561 "reset": true, 00:18:14.561 "compare": false, 00:18:14.561 "compare_and_write": false, 00:18:14.561 "abort": true, 00:18:14.561 "nvme_admin": false, 00:18:14.561 "nvme_io": false 00:18:14.561 }, 00:18:14.561 "memory_domains": [ 00:18:14.561 { 00:18:14.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.561 "dma_device_type": 2 00:18:14.561 } 00:18:14.561 ], 00:18:14.561 "driver_specific": {} 00:18:14.561 } 00:18:14.561 ] 00:18:14.561 12:02:19 -- common/autotest_common.sh@905 -- # return 0 00:18:14.561 12:02:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:14.820 [2024-11-29 12:02:20.105354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.820 [2024-11-29 12:02:20.107684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.820 [2024-11-29 12:02:20.107767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.820 12:02:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.821 12:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.079 12:02:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.079 "name": "Existed_Raid", 00:18:15.079 "uuid": "02ba68d6-dbf9-4517-b8ee-c7721f460a10", 00:18:15.079 "strip_size_kb": 0, 00:18:15.079 "state": "configuring", 
00:18:15.079 "raid_level": "raid1", 00:18:15.079 "superblock": true, 00:18:15.079 "num_base_bdevs": 2, 00:18:15.079 "num_base_bdevs_discovered": 1, 00:18:15.079 "num_base_bdevs_operational": 2, 00:18:15.079 "base_bdevs_list": [ 00:18:15.079 { 00:18:15.079 "name": "BaseBdev1", 00:18:15.079 "uuid": "fc0e1fcd-c70f-49eb-87c8-6d06ceefcafc", 00:18:15.079 "is_configured": true, 00:18:15.079 "data_offset": 2048, 00:18:15.079 "data_size": 63488 00:18:15.079 }, 00:18:15.079 { 00:18:15.079 "name": "BaseBdev2", 00:18:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.079 "is_configured": false, 00:18:15.079 "data_offset": 0, 00:18:15.079 "data_size": 0 00:18:15.079 } 00:18:15.079 ] 00:18:15.079 }' 00:18:15.079 12:02:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.079 12:02:20 -- common/autotest_common.sh@10 -- # set +x 00:18:15.646 12:02:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:15.904 [2024-11-29 12:02:21.306202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.904 [2024-11-29 12:02:21.306484] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:18:15.904 [2024-11-29 12:02:21.306502] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:15.904 [2024-11-29 12:02:21.306643] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:15.904 [2024-11-29 12:02:21.307100] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:18:15.904 [2024-11-29 12:02:21.307127] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:18:15.904 [2024-11-29 12:02:21.307293] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.904 BaseBdev2 00:18:15.904 12:02:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:15.904 12:02:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:15.904 12:02:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:15.904 12:02:21 -- common/autotest_common.sh@899 -- # local i 00:18:15.904 12:02:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:15.904 12:02:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:15.904 12:02:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.163 12:02:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.421 [ 00:18:16.421 { 00:18:16.421 "name": "BaseBdev2", 00:18:16.421 "aliases": [ 00:18:16.421 "fae7fbf5-584c-46e2-9b65-0b212eb60df1" 00:18:16.421 ], 00:18:16.421 "product_name": "Malloc disk", 00:18:16.421 "block_size": 512, 00:18:16.421 "num_blocks": 65536, 00:18:16.421 "uuid": "fae7fbf5-584c-46e2-9b65-0b212eb60df1", 00:18:16.421 "assigned_rate_limits": { 00:18:16.421 "rw_ios_per_sec": 0, 00:18:16.421 "rw_mbytes_per_sec": 0, 00:18:16.421 "r_mbytes_per_sec": 0, 00:18:16.421 "w_mbytes_per_sec": 0 00:18:16.421 }, 00:18:16.421 "claimed": true, 00:18:16.421 "claim_type": "exclusive_write", 00:18:16.421 "zoned": false, 00:18:16.421 "supported_io_types": { 00:18:16.421 "read": true, 00:18:16.421 "write": true, 00:18:16.421 "unmap": true, 00:18:16.421 "write_zeroes": true, 00:18:16.421 "flush": true, 00:18:16.421 "reset": true, 
00:18:16.421 "compare": false, 00:18:16.421 "compare_and_write": false, 00:18:16.421 "abort": true, 00:18:16.421 "nvme_admin": false, 00:18:16.421 "nvme_io": false 00:18:16.421 }, 00:18:16.421 "memory_domains": [ 00:18:16.421 { 00:18:16.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.421 "dma_device_type": 2 00:18:16.421 } 00:18:16.421 ], 00:18:16.421 "driver_specific": {} 00:18:16.421 } 00:18:16.421 ] 00:18:16.421 12:02:21 -- common/autotest_common.sh@905 -- # return 0 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.421 12:02:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.680 12:02:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.680 "name": "Existed_Raid", 00:18:16.680 "uuid": "02ba68d6-dbf9-4517-b8ee-c7721f460a10", 00:18:16.680 "strip_size_kb": 0, 00:18:16.680 "state": "online", 00:18:16.680 "raid_level": "raid1", 00:18:16.680 "superblock": true, 00:18:16.680 "num_base_bdevs": 2, 00:18:16.680 "num_base_bdevs_discovered": 2, 00:18:16.680 "num_base_bdevs_operational": 2, 00:18:16.680 "base_bdevs_list": [ 00:18:16.680 { 00:18:16.680 "name": "BaseBdev1", 00:18:16.680 "uuid": "fc0e1fcd-c70f-49eb-87c8-6d06ceefcafc", 00:18:16.680 "is_configured": true, 00:18:16.680 "data_offset": 2048, 00:18:16.680 "data_size": 63488 00:18:16.680 }, 00:18:16.680 { 00:18:16.680 "name": "BaseBdev2", 00:18:16.680 "uuid": "fae7fbf5-584c-46e2-9b65-0b212eb60df1", 00:18:16.680 "is_configured": true, 00:18:16.680 "data_offset": 2048, 00:18:16.680 "data_size": 63488 00:18:16.680 } 00:18:16.680 ] 00:18:16.680 }' 00:18:16.680 12:02:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.680 12:02:22 -- common/autotest_common.sh@10 -- # set +x 00:18:17.614 12:02:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:17.614 [2024-11-29 12:02:23.063245] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.614 
12:02:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.614 12:02:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.872 12:02:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.872 "name": "Existed_Raid", 00:18:17.872 "uuid": "02ba68d6-dbf9-4517-b8ee-c7721f460a10", 00:18:17.872 "strip_size_kb": 0, 00:18:17.872 "state": "online", 00:18:17.872 "raid_level": "raid1", 00:18:17.872 "superblock": true, 00:18:17.872 "num_base_bdevs": 2, 00:18:17.872 "num_base_bdevs_discovered": 1, 00:18:17.872 "num_base_bdevs_operational": 1, 00:18:17.872 "base_bdevs_list": [ 00:18:17.872 { 00:18:17.872 "name": null, 00:18:17.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.872 "is_configured": false, 00:18:17.872 "data_offset": 2048, 00:18:17.872 "data_size": 63488 00:18:17.872 }, 00:18:17.872 { 00:18:17.872 "name": "BaseBdev2", 00:18:17.872 "uuid": "fae7fbf5-584c-46e2-9b65-0b212eb60df1", 00:18:17.872 "is_configured": true, 00:18:17.872 "data_offset": 2048, 00:18:17.872 "data_size": 63488 00:18:17.872 } 00:18:17.872 ] 00:18:17.872 }' 00:18:17.872 12:02:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.872 12:02:23 -- common/autotest_common.sh@10 -- # set +x 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:18.807 12:02:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:19.065 [2024-11-29 12:02:24.512553] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.065 [2024-11-29 12:02:24.512618] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.065 [2024-11-29 12:02:24.512742] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.065 [2024-11-29 12:02:24.526266] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.065 [2024-11-29 12:02:24.526296] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:18:19.065 12:02:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:19.065 12:02:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:19.065 12:02:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:19.065 12:02:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:19.323 12:02:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:19.323 12:02:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:19.323 12:02:24 -- bdev/bdev_raid.sh@287 -- # killprocess 125463 00:18:19.323 12:02:24 -- common/autotest_common.sh@936 -- # '[' -z 125463 ']' 00:18:19.323 12:02:24 -- common/autotest_common.sh@940 -- # kill -0 125463 00:18:19.323 12:02:24 -- common/autotest_common.sh@941 -- # uname 00:18:19.323 12:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.582 12:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125463 00:18:19.582 killing process with pid 125463 00:18:19.582 12:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.582 12:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:19.582 12:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125463' 00:18:19.582 12:02:24 -- common/autotest_common.sh@955 -- # kill 125463 00:18:19.582 12:02:24 -- common/autotest_common.sh@960 -- # wait 125463 00:18:19.582 [2024-11-29 12:02:24.853584] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.582 [2024-11-29 12:02:24.853769] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.839 ************************************ 00:18:19.839 END TEST raid_state_function_test_sb 00:18:19.839 ************************************ 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:19.839 00:18:19.839 real 0m11.241s 00:18:19.839 user 0m20.426s 00:18:19.839 sys 0m1.505s 00:18:19.839 12:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:19.839 12:02:25 -- common/autotest_common.sh@10 -- # set +x 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:18:19.839 12:02:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:19.839 12:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.839 12:02:25 -- common/autotest_common.sh@10 -- # set +x 00:18:19.839 ************************************ 00:18:19.839 START TEST raid_superblock_test 00:18:19.839 ************************************ 00:18:19.839 12:02:25 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=125799 00:18:19.839 12:02:25 
-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:19.839 12:02:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125799 /var/tmp/spdk-raid.sock 00:18:19.839 12:02:25 -- common/autotest_common.sh@829 -- # '[' -z 125799 ']' 00:18:19.839 12:02:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.839 12:02:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.839 12:02:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.839 12:02:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.839 12:02:25 -- common/autotest_common.sh@10 -- # set +x 00:18:19.839 [2024-11-29 12:02:25.282171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:19.839 [2024-11-29 12:02:25.282402] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125799 ] 00:18:20.097 [2024-11-29 12:02:25.423498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.097 [2024-11-29 12:02:25.521734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.097 [2024-11-29 12:02:25.580988] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.031 12:02:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.031 12:02:26 -- common/autotest_common.sh@862 -- # return 0 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.031 12:02:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:21.031 malloc1 00:18:21.289 12:02:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.548 [2024-11-29 12:02:26.809226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.548 [2024-11-29 12:02:26.809363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.548 [2024-11-29 12:02:26.809418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:21.548 [2024-11-29 12:02:26.809477] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.548 [2024-11-29 12:02:26.812272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.548 [2024-11-29 12:02:26.812336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.548 pt1 00:18:21.548 
12:02:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:21.548 12:02:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:21.548 12:02:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:21.548 12:02:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:21.548 12:02:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:21.548 12:02:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.548 12:02:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.549 12:02:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.549 12:02:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:21.549 malloc2 00:18:21.807 12:02:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.807 [2024-11-29 12:02:27.276754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.807 [2024-11-29 12:02:27.276856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.807 [2024-11-29 12:02:27.276907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:21.807 [2024-11-29 12:02:27.276956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.807 [2024-11-29 12:02:27.279531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.807 [2024-11-29 12:02:27.279586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.807 pt2 00:18:21.807 12:02:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:21.807 12:02:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:21.807 12:02:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:22.065 [2024-11-29 12:02:27.520874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.065 [2024-11-29 12:02:27.523255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.065 [2024-11-29 12:02:27.523527] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:18:22.065 [2024-11-29 12:02:27.523551] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:22.065 [2024-11-29 12:02:27.523712] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:22.065 [2024-11-29 12:02:27.524158] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:18:22.065 [2024-11-29 12:02:27.524183] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:18:22.065 [2024-11-29 12:02:27.524377] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.065 12:02:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.323 12:02:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.323 "name": "raid_bdev1", 00:18:22.323 "uuid": "274751b4-c2b0-4bfd-b2ac-09d3a61a95dc", 00:18:22.323 "strip_size_kb": 0, 00:18:22.323 "state": "online", 00:18:22.323 "raid_level": "raid1", 00:18:22.323 "superblock": true, 00:18:22.323 "num_base_bdevs": 2, 00:18:22.323 "num_base_bdevs_discovered": 2, 00:18:22.323 "num_base_bdevs_operational": 2, 00:18:22.323 "base_bdevs_list": [ 00:18:22.323 { 00:18:22.323 "name": "pt1", 00:18:22.323 "uuid": "3a9946e7-e58a-5a05-bb56-bdd446f91f69", 00:18:22.323 "is_configured": true, 00:18:22.323 "data_offset": 2048, 00:18:22.323 "data_size": 63488 00:18:22.323 }, 00:18:22.323 { 00:18:22.323 "name": "pt2", 00:18:22.323 "uuid": "3c10eda4-9882-5218-8204-ad880e4369fd", 00:18:22.323 "is_configured": true, 00:18:22.323 "data_offset": 2048, 00:18:22.323 "data_size": 63488 00:18:22.323 } 00:18:22.323 ] 00:18:22.323 }' 00:18:22.323 12:02:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.323 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:18:23.256 12:02:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:23.256 12:02:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:23.256 [2024-11-29 12:02:28.709356] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.256 12:02:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=274751b4-c2b0-4bfd-b2ac-09d3a61a95dc 00:18:23.256 12:02:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 274751b4-c2b0-4bfd-b2ac-09d3a61a95dc ']' 00:18:23.256 12:02:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:23.514 [2024-11-29 12:02:28.977124] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.514 [2024-11-29 12:02:28.977166] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.514 [2024-11-29 12:02:28.977284] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.514 [2024-11-29 12:02:28.977374] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.514 [2024-11-29 12:02:28.977389] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:18:23.514 12:02:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:23.514 12:02:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.081 12:02:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:24.081 12:02:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:24.081 12:02:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:24.081 12:02:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:24.081 12:02:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:24.081 12:02:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:24.339 12:02:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:24.339 12:02:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:24.604 12:02:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:24.604 12:02:30 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:24.604 12:02:30 -- common/autotest_common.sh@650 -- # local es=0 00:18:24.604 12:02:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:24.604 12:02:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.604 12:02:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.604 12:02:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.604 12:02:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.604 12:02:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.604 12:02:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.604 12:02:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.604 12:02:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:24.604 12:02:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:24.862 [2024-11-29 12:02:30.349449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:24.862 [2024-11-29 12:02:30.351819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:24.862 [2024-11-29 12:02:30.351900] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:24.862 [2024-11-29 12:02:30.352007] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:24.862 [2024-11-29 12:02:30.352049] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.862 [2024-11-29 12:02:30.352061] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:18:24.862 request: 00:18:24.862 { 00:18:24.862 "name": "raid_bdev1", 00:18:24.862 "raid_level": "raid1", 00:18:24.862 "base_bdevs": [ 00:18:24.862 "malloc1", 00:18:24.862 "malloc2" 00:18:24.862 ], 00:18:24.862 "superblock": false, 00:18:24.862 "method": "bdev_raid_create", 00:18:24.862 "req_id": 1 00:18:24.862 } 00:18:24.862 Got JSON-RPC error response 00:18:24.862 response: 00:18:24.862 { 00:18:24.862 "code": -17, 00:18:24.862 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:24.862 } 00:18:24.862 12:02:30 -- common/autotest_common.sh@653 -- # es=1 00:18:24.862 12:02:30 -- common/autotest_common.sh@661 -- # 
(( es > 128 )) 00:18:24.862 12:02:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.862 12:02:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.120 12:02:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:25.120 12:02:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.379 12:02:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:25.379 12:02:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:25.379 12:02:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:25.637 [2024-11-29 12:02:30.921458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.637 [2024-11-29 12:02:30.921622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.637 [2024-11-29 12:02:30.921664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:25.637 [2024-11-29 12:02:30.921706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.637 [2024-11-29 12:02:30.924300] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.637 [2024-11-29 12:02:30.924360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:25.637 [2024-11-29 12:02:30.924462] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:25.637 [2024-11-29 12:02:30.924536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.637 pt1 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.637 12:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.896 12:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.896 "name": "raid_bdev1", 00:18:25.896 "uuid": "274751b4-c2b0-4bfd-b2ac-09d3a61a95dc", 00:18:25.896 "strip_size_kb": 0, 00:18:25.896 "state": "configuring", 00:18:25.896 "raid_level": "raid1", 00:18:25.896 "superblock": true, 00:18:25.896 "num_base_bdevs": 2, 00:18:25.896 "num_base_bdevs_discovered": 1, 00:18:25.896 "num_base_bdevs_operational": 2, 00:18:25.896 "base_bdevs_list": [ 00:18:25.896 { 00:18:25.896 "name": "pt1", 00:18:25.896 "uuid": "3a9946e7-e58a-5a05-bb56-bdd446f91f69", 00:18:25.896 "is_configured": true, 00:18:25.896 "data_offset": 2048, 00:18:25.896 "data_size": 63488 00:18:25.896 }, 00:18:25.896 { 00:18:25.896 "name": null, 00:18:25.896 "uuid": "3c10eda4-9882-5218-8204-ad880e4369fd", 00:18:25.896 
"is_configured": false, 00:18:25.896 "data_offset": 2048, 00:18:25.896 "data_size": 63488 00:18:25.896 } 00:18:25.896 ] 00:18:25.896 }' 00:18:25.896 12:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.896 12:02:31 -- common/autotest_common.sh@10 -- # set +x 00:18:26.463 12:02:31 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:18:26.463 12:02:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:26.463 12:02:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.463 12:02:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.721 [2024-11-29 12:02:32.105747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.721 [2024-11-29 12:02:32.105879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.721 [2024-11-29 12:02:32.105922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:26.721 [2024-11-29 12:02:32.105952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.721 [2024-11-29 12:02:32.106467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.721 [2024-11-29 12:02:32.106517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.721 [2024-11-29 12:02:32.106611] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:26.721 [2024-11-29 12:02:32.106646] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.721 [2024-11-29 12:02:32.106788] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:18:26.721 [2024-11-29 12:02:32.106810] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:26.721 [2024-11-29 12:02:32.106900] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:18:26.721 [2024-11-29 12:02:32.107242] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:18:26.721 [2024-11-29 12:02:32.107266] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:18:26.721 [2024-11-29 12:02:32.107381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.721 pt2 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.721 12:02:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.721 12:02:32 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.979 12:02:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.979 "name": "raid_bdev1", 00:18:26.979 "uuid": "274751b4-c2b0-4bfd-b2ac-09d3a61a95dc", 00:18:26.979 "strip_size_kb": 0, 00:18:26.979 "state": "online", 00:18:26.979 "raid_level": "raid1", 00:18:26.979 "superblock": true, 00:18:26.979 "num_base_bdevs": 2, 00:18:26.979 "num_base_bdevs_discovered": 2, 00:18:26.979 "num_base_bdevs_operational": 2, 00:18:26.979 "base_bdevs_list": [ 00:18:26.979 { 00:18:26.979 "name": "pt1", 00:18:26.979 "uuid": "3a9946e7-e58a-5a05-bb56-bdd446f91f69", 00:18:26.979 "is_configured": true, 00:18:26.979 "data_offset": 2048, 00:18:26.979 "data_size": 63488 00:18:26.979 }, 00:18:26.979 { 00:18:26.979 "name": "pt2", 00:18:26.979 "uuid": "3c10eda4-9882-5218-8204-ad880e4369fd", 00:18:26.979 "is_configured": true, 00:18:26.979 "data_offset": 2048, 00:18:26.979 "data_size": 63488 00:18:26.979 } 00:18:26.979 ] 00:18:26.979 }' 00:18:26.979 12:02:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.979 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:18:27.546 12:02:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:27.546 12:02:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:27.804 [2024-11-29 12:02:33.243389] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.804 12:02:33 -- bdev/bdev_raid.sh@430 -- # '[' 274751b4-c2b0-4bfd-b2ac-09d3a61a95dc '!=' 274751b4-c2b0-4bfd-b2ac-09d3a61a95dc ']' 00:18:27.804 12:02:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:27.804 12:02:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:27.804 12:02:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:27.804 12:02:33 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:28.063 [2024-11-29 12:02:33.507218] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.063 12:02:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.321 12:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.321 "name": "raid_bdev1", 00:18:28.321 "uuid": "274751b4-c2b0-4bfd-b2ac-09d3a61a95dc", 00:18:28.321 "strip_size_kb": 0, 00:18:28.321 "state": "online", 00:18:28.321 "raid_level": "raid1", 00:18:28.321 "superblock": true, 00:18:28.321 "num_base_bdevs": 2, 00:18:28.321 "num_base_bdevs_discovered": 1, 00:18:28.321 "num_base_bdevs_operational": 1, 00:18:28.321 
"base_bdevs_list": [ 00:18:28.321 { 00:18:28.321 "name": null, 00:18:28.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.321 "is_configured": false, 00:18:28.321 "data_offset": 2048, 00:18:28.321 "data_size": 63488 00:18:28.321 }, 00:18:28.321 { 00:18:28.321 "name": "pt2", 00:18:28.321 "uuid": "3c10eda4-9882-5218-8204-ad880e4369fd", 00:18:28.321 "is_configured": true, 00:18:28.321 "data_offset": 2048, 00:18:28.321 "data_size": 63488 00:18:28.321 } 00:18:28.321 ] 00:18:28.321 }' 00:18:28.321 12:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.321 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:18:29.256 12:02:34 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:29.514 [2024-11-29 12:02:34.783086] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.514 [2024-11-29 12:02:34.783143] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.514 [2024-11-29 12:02:34.783256] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.514 [2024-11-29 12:02:34.783326] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.514 [2024-11-29 12:02:34.783340] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:18:29.514 12:02:34 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.514 12:02:34 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:29.792 12:02:35 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:29.792 12:02:35 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:29.792 12:02:35 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:29.792 12:02:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:29.792 12:02:35 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:30.052 12:02:35 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:30.052 12:02:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:30.052 12:02:35 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:30.052 12:02:35 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:30.052 12:02:35 -- bdev/bdev_raid.sh@462 -- # i=1 00:18:30.052 12:02:35 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.310 [2024-11-29 12:02:35.599222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.310 [2024-11-29 12:02:35.599382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.310 [2024-11-29 12:02:35.599428] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:30.310 [2024-11-29 12:02:35.599460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.310 [2024-11-29 12:02:35.602339] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.310 [2024-11-29 12:02:35.602420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.310 [2024-11-29 12:02:35.602528] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:30.310 [2024-11-29 12:02:35.602572] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.310 [2024-11-29 12:02:35.602698] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:30.310 [2024-11-29 12:02:35.602711] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:30.310 [2024-11-29 12:02:35.602797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:18:30.310 [2024-11-29 12:02:35.603147] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:30.310 [2024-11-29 12:02:35.603172] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:18:30.310 [2024-11-29 12:02:35.603337] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.310 pt2 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.310 12:02:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.568 12:02:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.568 "name": "raid_bdev1", 00:18:30.568 "uuid": "274751b4-c2b0-4bfd-b2ac-09d3a61a95dc", 00:18:30.568 "strip_size_kb": 0, 00:18:30.568 "state": "online", 00:18:30.568 "raid_level": "raid1", 00:18:30.568 "superblock": true, 00:18:30.568 "num_base_bdevs": 2, 00:18:30.568 "num_base_bdevs_discovered": 1, 00:18:30.568 "num_base_bdevs_operational": 1, 00:18:30.568 "base_bdevs_list": [ 00:18:30.568 { 00:18:30.568 "name": null, 00:18:30.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.568 "is_configured": false, 00:18:30.568 "data_offset": 2048, 00:18:30.568 "data_size": 63488 00:18:30.568 }, 00:18:30.568 { 00:18:30.568 "name": "pt2", 00:18:30.568 "uuid": "3c10eda4-9882-5218-8204-ad880e4369fd", 00:18:30.568 "is_configured": true, 00:18:30.568 "data_offset": 2048, 00:18:30.568 "data_size": 63488 00:18:30.568 } 00:18:30.568 ] 00:18:30.568 }' 00:18:30.568 12:02:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.568 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:18:31.135 12:02:36 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:18:31.135 12:02:36 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:31.135 12:02:36 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:31.393 [2024-11-29 12:02:36.851924] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.393 12:02:36 -- bdev/bdev_raid.sh@506 -- # '[' 274751b4-c2b0-4bfd-b2ac-09d3a61a95dc '!=' 274751b4-c2b0-4bfd-b2ac-09d3a61a95dc ']' 00:18:31.393 12:02:36 -- 
bdev/bdev_raid.sh@511 -- # killprocess 125799 00:18:31.393 12:02:36 -- common/autotest_common.sh@936 -- # '[' -z 125799 ']' 00:18:31.393 12:02:36 -- common/autotest_common.sh@940 -- # kill -0 125799 00:18:31.393 12:02:36 -- common/autotest_common.sh@941 -- # uname 00:18:31.393 12:02:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.393 12:02:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125799 00:18:31.393 killing process with pid 125799 00:18:31.393 12:02:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.393 12:02:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.393 12:02:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125799' 00:18:31.393 12:02:36 -- common/autotest_common.sh@955 -- # kill 125799 00:18:31.393 12:02:36 -- common/autotest_common.sh@960 -- # wait 125799 00:18:31.393 [2024-11-29 12:02:36.897237] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.393 [2024-11-29 12:02:36.897345] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.393 [2024-11-29 12:02:36.897413] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.393 [2024-11-29 12:02:36.897425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:18:31.652 [2024-11-29 12:02:36.925779] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.910 ************************************ 00:18:31.910 END TEST raid_superblock_test 00:18:31.910 ************************************ 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:31.910 00:18:31.910 real 0m12.011s 00:18:31.910 user 0m22.045s 00:18:31.910 sys 0m1.605s 00:18:31.910 12:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:31.910 12:02:37 -- common/autotest_common.sh@10 -- # set +x 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:18:31.910 12:02:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:31.910 12:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.910 12:02:37 -- common/autotest_common.sh@10 -- # set +x 00:18:31.910 ************************************ 00:18:31.910 START TEST raid_state_function_test 00:18:31.910 ************************************ 00:18:31.910 12:02:37 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=126162 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:31.910 Process raid pid: 126162 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126162' 00:18:31.910 12:02:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126162 /var/tmp/spdk-raid.sock 00:18:31.910 12:02:37 -- common/autotest_common.sh@829 -- # '[' -z 126162 ']' 00:18:31.910 12:02:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:31.910 12:02:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:31.910 12:02:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:31.910 12:02:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.910 12:02:37 -- common/autotest_common.sh@10 -- # set +x 00:18:31.910 [2024-11-29 12:02:37.356603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:31.911 [2024-11-29 12:02:37.356844] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.168 [2024-11-29 12:02:37.503715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.168 [2024-11-29 12:02:37.593001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.169 [2024-11-29 12:02:37.651298] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.104 12:02:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.104 12:02:38 -- common/autotest_common.sh@862 -- # return 0 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:33.104 [2024-11-29 12:02:38.548866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.104 [2024-11-29 12:02:38.549172] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.104 [2024-11-29 12:02:38.549305] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.104 [2024-11-29 12:02:38.549445] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.104 [2024-11-29 12:02:38.549547] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.104 [2024-11-29 12:02:38.549697] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.104 12:02:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.363 12:02:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.363 "name": "Existed_Raid", 00:18:33.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.363 "strip_size_kb": 64, 00:18:33.363 "state": "configuring", 00:18:33.363 "raid_level": "raid0", 00:18:33.363 "superblock": false, 00:18:33.363 "num_base_bdevs": 3, 00:18:33.363 "num_base_bdevs_discovered": 0, 00:18:33.363 "num_base_bdevs_operational": 3, 00:18:33.363 "base_bdevs_list": [ 00:18:33.363 { 00:18:33.363 "name": "BaseBdev1", 00:18:33.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.363 "is_configured": false, 00:18:33.363 "data_offset": 0, 00:18:33.363 "data_size": 0 00:18:33.363 }, 00:18:33.363 { 00:18:33.363 "name": "BaseBdev2", 00:18:33.363 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:33.363 "is_configured": false, 00:18:33.363 "data_offset": 0, 00:18:33.363 "data_size": 0 00:18:33.363 }, 00:18:33.363 { 00:18:33.363 "name": "BaseBdev3", 00:18:33.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.363 "is_configured": false, 00:18:33.363 "data_offset": 0, 00:18:33.363 "data_size": 0 00:18:33.363 } 00:18:33.363 ] 00:18:33.363 }' 00:18:33.363 12:02:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.363 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:18:33.930 12:02:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:34.189 [2024-11-29 12:02:39.657068] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.189 [2024-11-29 12:02:39.657384] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:34.189 12:02:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:34.447 [2024-11-29 12:02:39.937178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.447 [2024-11-29 12:02:39.938107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.447 [2024-11-29 12:02:39.938275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.447 [2024-11-29 12:02:39.938362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.447 [2024-11-29 12:02:39.938475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:34.447 [2024-11-29 12:02:39.938555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:34.447 12:02:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:34.706 [2024-11-29 12:02:40.172430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.706 BaseBdev1 00:18:34.706 12:02:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:34.706 12:02:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:34.706 12:02:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:34.706 12:02:40 -- common/autotest_common.sh@899 -- # local i 00:18:34.706 12:02:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:34.706 12:02:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:34.706 12:02:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:34.965 12:02:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.224 [ 00:18:35.224 { 00:18:35.224 "name": "BaseBdev1", 00:18:35.224 "aliases": [ 00:18:35.224 "c5851b3c-fba3-4df9-962b-7ed3b5060155" 00:18:35.224 ], 00:18:35.224 "product_name": "Malloc disk", 00:18:35.224 "block_size": 512, 00:18:35.224 "num_blocks": 65536, 00:18:35.224 "uuid": "c5851b3c-fba3-4df9-962b-7ed3b5060155", 00:18:35.224 "assigned_rate_limits": { 00:18:35.224 "rw_ios_per_sec": 0, 00:18:35.224 "rw_mbytes_per_sec": 0, 00:18:35.224 "r_mbytes_per_sec": 0, 00:18:35.224 "w_mbytes_per_sec": 0 
00:18:35.224 }, 00:18:35.224 "claimed": true, 00:18:35.224 "claim_type": "exclusive_write", 00:18:35.224 "zoned": false, 00:18:35.224 "supported_io_types": { 00:18:35.224 "read": true, 00:18:35.224 "write": true, 00:18:35.224 "unmap": true, 00:18:35.224 "write_zeroes": true, 00:18:35.224 "flush": true, 00:18:35.224 "reset": true, 00:18:35.224 "compare": false, 00:18:35.224 "compare_and_write": false, 00:18:35.224 "abort": true, 00:18:35.224 "nvme_admin": false, 00:18:35.224 "nvme_io": false 00:18:35.224 }, 00:18:35.224 "memory_domains": [ 00:18:35.224 { 00:18:35.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.224 "dma_device_type": 2 00:18:35.224 } 00:18:35.224 ], 00:18:35.224 "driver_specific": {} 00:18:35.224 } 00:18:35.224 ] 00:18:35.224 12:02:40 -- common/autotest_common.sh@905 -- # return 0 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.224 12:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.483 12:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.483 12:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.483 12:02:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.483 12:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.483 12:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.483 "name": "Existed_Raid", 00:18:35.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.483 "strip_size_kb": 64, 00:18:35.483 "state": "configuring", 00:18:35.483 "raid_level": "raid0", 00:18:35.483 "superblock": false, 00:18:35.483 "num_base_bdevs": 3, 00:18:35.483 "num_base_bdevs_discovered": 1, 00:18:35.483 "num_base_bdevs_operational": 3, 00:18:35.483 "base_bdevs_list": [ 00:18:35.483 { 00:18:35.483 "name": "BaseBdev1", 00:18:35.483 "uuid": "c5851b3c-fba3-4df9-962b-7ed3b5060155", 00:18:35.483 "is_configured": true, 00:18:35.483 "data_offset": 0, 00:18:35.483 "data_size": 65536 00:18:35.483 }, 00:18:35.483 { 00:18:35.483 "name": "BaseBdev2", 00:18:35.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.483 "is_configured": false, 00:18:35.483 "data_offset": 0, 00:18:35.483 "data_size": 0 00:18:35.483 }, 00:18:35.483 { 00:18:35.483 "name": "BaseBdev3", 00:18:35.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.483 "is_configured": false, 00:18:35.483 "data_offset": 0, 00:18:35.483 "data_size": 0 00:18:35.483 } 00:18:35.483 ] 00:18:35.483 }' 00:18:35.483 12:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.483 12:02:40 -- common/autotest_common.sh@10 -- # set +x 00:18:36.418 12:02:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:36.418 [2024-11-29 12:02:41.865103] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.418 [2024-11-29 12:02:41.865448] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:18:36.418 12:02:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:36.418 12:02:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:36.676 [2024-11-29 12:02:42.089272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.676 [2024-11-29 12:02:42.091866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.676 [2024-11-29 12:02:42.092092] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.676 [2024-11-29 12:02:42.092210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:36.676 [2024-11-29 12:02:42.092282] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.676 12:02:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.934 12:02:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.934 "name": "Existed_Raid", 00:18:36.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.934 "strip_size_kb": 64, 00:18:36.934 "state": "configuring", 00:18:36.934 "raid_level": "raid0", 00:18:36.934 "superblock": false, 00:18:36.934 "num_base_bdevs": 3, 00:18:36.934 "num_base_bdevs_discovered": 1, 00:18:36.934 "num_base_bdevs_operational": 3, 00:18:36.934 "base_bdevs_list": [ 00:18:36.934 { 00:18:36.934 "name": "BaseBdev1", 00:18:36.934 "uuid": "c5851b3c-fba3-4df9-962b-7ed3b5060155", 00:18:36.934 "is_configured": true, 00:18:36.934 "data_offset": 0, 00:18:36.934 "data_size": 65536 00:18:36.934 }, 00:18:36.934 { 00:18:36.934 "name": "BaseBdev2", 00:18:36.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.934 "is_configured": false, 00:18:36.934 "data_offset": 0, 00:18:36.934 "data_size": 0 00:18:36.934 }, 00:18:36.934 { 00:18:36.934 "name": "BaseBdev3", 00:18:36.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.934 "is_configured": false, 00:18:36.934 "data_offset": 0, 00:18:36.934 "data_size": 0 00:18:36.934 } 00:18:36.934 ] 00:18:36.934 }' 00:18:36.934 12:02:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.934 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:18:37.868 12:02:43 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:37.868 [2024-11-29 12:02:43.334853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.868 BaseBdev2 00:18:37.868 12:02:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:37.868 12:02:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:37.868 12:02:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:37.868 12:02:43 -- common/autotest_common.sh@899 -- # local i 00:18:37.868 12:02:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:37.868 12:02:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:37.868 12:02:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.126 12:02:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:38.693 [ 00:18:38.693 { 00:18:38.693 "name": "BaseBdev2", 00:18:38.693 "aliases": [ 00:18:38.693 "3d0727c1-f4e8-4082-93a0-47e1d249e6c6" 00:18:38.693 ], 00:18:38.693 "product_name": "Malloc disk", 00:18:38.693 "block_size": 512, 00:18:38.693 "num_blocks": 65536, 00:18:38.693 "uuid": "3d0727c1-f4e8-4082-93a0-47e1d249e6c6", 00:18:38.693 "assigned_rate_limits": { 00:18:38.693 "rw_ios_per_sec": 0, 00:18:38.693 "rw_mbytes_per_sec": 0, 00:18:38.693 "r_mbytes_per_sec": 0, 00:18:38.693 "w_mbytes_per_sec": 0 00:18:38.693 }, 00:18:38.693 "claimed": true, 00:18:38.693 "claim_type": "exclusive_write", 00:18:38.693 "zoned": false, 00:18:38.693 "supported_io_types": { 00:18:38.693 "read": true, 00:18:38.693 "write": true, 00:18:38.693 "unmap": true, 00:18:38.693 "write_zeroes": true, 00:18:38.693 "flush": true, 00:18:38.693 "reset": true, 00:18:38.693 "compare": false, 00:18:38.693 "compare_and_write": false, 00:18:38.693 "abort": true, 00:18:38.693 "nvme_admin": false, 00:18:38.693 "nvme_io": false 00:18:38.693 }, 00:18:38.693 "memory_domains": [ 00:18:38.693 { 00:18:38.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.693 "dma_device_type": 2 00:18:38.693 } 00:18:38.693 ], 00:18:38.693 "driver_specific": {} 00:18:38.693 } 00:18:38.693 ] 00:18:38.693 12:02:43 -- common/autotest_common.sh@905 -- # return 0 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.693 12:02:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:38.693 12:02:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.693 "name": "Existed_Raid", 00:18:38.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.693 "strip_size_kb": 64, 00:18:38.693 "state": "configuring", 00:18:38.693 "raid_level": "raid0", 00:18:38.693 "superblock": false, 00:18:38.693 "num_base_bdevs": 3, 00:18:38.693 "num_base_bdevs_discovered": 2, 00:18:38.693 "num_base_bdevs_operational": 3, 00:18:38.693 "base_bdevs_list": [ 00:18:38.693 { 00:18:38.693 "name": "BaseBdev1", 00:18:38.693 "uuid": "c5851b3c-fba3-4df9-962b-7ed3b5060155", 00:18:38.693 "is_configured": true, 00:18:38.693 "data_offset": 0, 00:18:38.693 "data_size": 65536 00:18:38.693 }, 00:18:38.693 { 00:18:38.693 "name": "BaseBdev2", 00:18:38.693 "uuid": "3d0727c1-f4e8-4082-93a0-47e1d249e6c6", 00:18:38.693 "is_configured": true, 00:18:38.693 "data_offset": 0, 00:18:38.693 "data_size": 65536 00:18:38.693 }, 00:18:38.693 { 00:18:38.693 "name": "BaseBdev3", 00:18:38.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.693 "is_configured": false, 00:18:38.693 "data_offset": 0, 00:18:38.693 "data_size": 0 00:18:38.693 } 00:18:38.693 ] 00:18:38.693 }' 00:18:38.693 12:02:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.693 12:02:44 -- common/autotest_common.sh@10 -- # set +x 00:18:39.629 12:02:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:39.629 [2024-11-29 12:02:45.107270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.629 [2024-11-29 12:02:45.107335] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:39.629 [2024-11-29 12:02:45.107346] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:39.629 [2024-11-29 12:02:45.107519] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:39.629 [2024-11-29 12:02:45.107996] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:39.629 [2024-11-29 12:02:45.108021] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:39.629 [2024-11-29 12:02:45.108309] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.629 BaseBdev3 00:18:39.629 12:02:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:39.629 12:02:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:39.629 12:02:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:39.629 12:02:45 -- common/autotest_common.sh@899 -- # local i 00:18:39.629 12:02:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:39.629 12:02:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:39.629 12:02:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:40.195 12:02:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:40.195 [ 00:18:40.195 { 00:18:40.195 "name": "BaseBdev3", 00:18:40.195 "aliases": [ 00:18:40.195 "98ce39a8-18d4-4015-8903-186c334ebef2" 00:18:40.195 ], 00:18:40.195 "product_name": "Malloc disk", 00:18:40.195 "block_size": 512, 00:18:40.195 "num_blocks": 65536, 00:18:40.195 "uuid": "98ce39a8-18d4-4015-8903-186c334ebef2", 00:18:40.195 "assigned_rate_limits": { 00:18:40.195 
"rw_ios_per_sec": 0, 00:18:40.195 "rw_mbytes_per_sec": 0, 00:18:40.195 "r_mbytes_per_sec": 0, 00:18:40.196 "w_mbytes_per_sec": 0 00:18:40.196 }, 00:18:40.196 "claimed": true, 00:18:40.196 "claim_type": "exclusive_write", 00:18:40.196 "zoned": false, 00:18:40.196 "supported_io_types": { 00:18:40.196 "read": true, 00:18:40.196 "write": true, 00:18:40.196 "unmap": true, 00:18:40.196 "write_zeroes": true, 00:18:40.196 "flush": true, 00:18:40.196 "reset": true, 00:18:40.196 "compare": false, 00:18:40.196 "compare_and_write": false, 00:18:40.196 "abort": true, 00:18:40.196 "nvme_admin": false, 00:18:40.196 "nvme_io": false 00:18:40.196 }, 00:18:40.196 "memory_domains": [ 00:18:40.196 { 00:18:40.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.196 "dma_device_type": 2 00:18:40.196 } 00:18:40.196 ], 00:18:40.196 "driver_specific": {} 00:18:40.196 } 00:18:40.196 ] 00:18:40.196 12:02:45 -- common/autotest_common.sh@905 -- # return 0 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.196 12:02:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.454 12:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.454 "name": "Existed_Raid", 00:18:40.454 "uuid": "ca0d72ff-fa57-4cf6-9e49-4bea829abd70", 00:18:40.454 "strip_size_kb": 64, 00:18:40.454 "state": "online", 00:18:40.454 "raid_level": "raid0", 00:18:40.454 "superblock": false, 00:18:40.454 "num_base_bdevs": 3, 00:18:40.454 "num_base_bdevs_discovered": 3, 00:18:40.454 "num_base_bdevs_operational": 3, 00:18:40.454 "base_bdevs_list": [ 00:18:40.454 { 00:18:40.454 "name": "BaseBdev1", 00:18:40.454 "uuid": "c5851b3c-fba3-4df9-962b-7ed3b5060155", 00:18:40.454 "is_configured": true, 00:18:40.454 "data_offset": 0, 00:18:40.454 "data_size": 65536 00:18:40.454 }, 00:18:40.454 { 00:18:40.454 "name": "BaseBdev2", 00:18:40.454 "uuid": "3d0727c1-f4e8-4082-93a0-47e1d249e6c6", 00:18:40.454 "is_configured": true, 00:18:40.454 "data_offset": 0, 00:18:40.454 "data_size": 65536 00:18:40.454 }, 00:18:40.454 { 00:18:40.454 "name": "BaseBdev3", 00:18:40.454 "uuid": "98ce39a8-18d4-4015-8903-186c334ebef2", 00:18:40.454 "is_configured": true, 00:18:40.454 "data_offset": 0, 00:18:40.454 "data_size": 65536 00:18:40.454 } 00:18:40.454 ] 00:18:40.454 }' 00:18:40.454 12:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.454 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:18:41.019 12:02:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:18:41.278 [2024-11-29 12:02:46.787285] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.278 [2024-11-29 12:02:46.787355] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.278 [2024-11-29 12:02:46.787455] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.566 12:02:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.837 12:02:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.837 "name": "Existed_Raid", 00:18:41.837 "uuid": "ca0d72ff-fa57-4cf6-9e49-4bea829abd70", 00:18:41.837 "strip_size_kb": 64, 00:18:41.837 "state": "offline", 00:18:41.837 "raid_level": "raid0", 00:18:41.837 "superblock": false, 00:18:41.837 "num_base_bdevs": 3, 00:18:41.837 "num_base_bdevs_discovered": 2, 00:18:41.837 "num_base_bdevs_operational": 2, 00:18:41.837 "base_bdevs_list": [ 00:18:41.837 { 00:18:41.837 "name": null, 00:18:41.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.837 "is_configured": false, 00:18:41.837 "data_offset": 0, 00:18:41.837 "data_size": 65536 00:18:41.837 }, 00:18:41.837 { 00:18:41.837 "name": "BaseBdev2", 00:18:41.837 "uuid": "3d0727c1-f4e8-4082-93a0-47e1d249e6c6", 00:18:41.837 "is_configured": true, 00:18:41.837 "data_offset": 0, 00:18:41.837 "data_size": 65536 00:18:41.837 }, 00:18:41.837 { 00:18:41.837 "name": "BaseBdev3", 00:18:41.837 "uuid": "98ce39a8-18d4-4015-8903-186c334ebef2", 00:18:41.837 "is_configured": true, 00:18:41.837 "data_offset": 0, 00:18:41.837 "data_size": 65536 00:18:41.837 } 00:18:41.837 ] 00:18:41.837 }' 00:18:41.837 12:02:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.837 12:02:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.405 12:02:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:42.405 12:02:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.405 12:02:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.405 12:02:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.664 12:02:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.664 12:02:48 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.664 12:02:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:42.923 [2024-11-29 12:02:48.266514] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.923 12:02:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:42.923 12:02:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.923 12:02:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.923 12:02:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.182 12:02:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:43.182 12:02:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.182 12:02:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:43.440 [2024-11-29 12:02:48.790087] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.440 [2024-11-29 12:02:48.790189] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:43.440 12:02:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.440 12:02:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.440 12:02:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.440 12:02:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.699 12:02:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:43.699 12:02:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:43.699 12:02:49 -- bdev/bdev_raid.sh@287 -- # killprocess 126162 00:18:43.699 12:02:49 -- common/autotest_common.sh@936 -- # '[' -z 126162 ']' 00:18:43.699 12:02:49 -- common/autotest_common.sh@940 -- # kill -0 126162 00:18:43.699 12:02:49 -- common/autotest_common.sh@941 -- # uname 00:18:43.699 12:02:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.699 12:02:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126162 00:18:43.699 12:02:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:43.699 12:02:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:43.699 12:02:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126162' 00:18:43.699 killing process with pid 126162 00:18:43.699 12:02:49 -- common/autotest_common.sh@955 -- # kill 126162 00:18:43.699 [2024-11-29 12:02:49.107040] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.699 [2024-11-29 12:02:49.107169] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.699 12:02:49 -- common/autotest_common.sh@960 -- # wait 126162 00:18:43.956 12:02:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:43.956 00:18:43.956 real 0m12.129s 00:18:43.956 user 0m22.030s 00:18:43.956 sys 0m1.662s 00:18:43.956 ************************************ 00:18:43.956 END TEST raid_state_function_test 00:18:43.956 ************************************ 00:18:43.956 12:02:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:43.956 12:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:18:44.213 12:02:49 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:44.213 12:02:49 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.213 12:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.213 ************************************ 00:18:44.213 START TEST raid_state_function_test_sb 00:18:44.213 ************************************ 00:18:44.213 12:02:49 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.213 12:02:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=126545 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126545' 00:18:44.214 Process raid pid: 126545 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126545 /var/tmp/spdk-raid.sock 00:18:44.214 12:02:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:44.214 12:02:49 -- common/autotest_common.sh@829 -- # '[' -z 126545 ']' 00:18:44.214 12:02:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:44.214 12:02:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.214 12:02:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:44.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:44.214 12:02:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.214 12:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.214 [2024-11-29 12:02:49.544988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:44.214 [2024-11-29 12:02:49.545203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.214 [2024-11-29 12:02:49.686752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.472 [2024-11-29 12:02:49.774219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.472 [2024-11-29 12:02:49.828164] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.040 12:02:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.040 12:02:50 -- common/autotest_common.sh@862 -- # return 0 00:18:45.040 12:02:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:45.298 [2024-11-29 12:02:50.788316] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:45.298 [2024-11-29 12:02:50.788646] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:45.298 [2024-11-29 12:02:50.788812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:45.298 [2024-11-29 12:02:50.788909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:45.298 [2024-11-29 12:02:50.788950] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:45.298 [2024-11-29 12:02:50.789117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.298 12:02:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.866 12:02:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.866 "name": "Existed_Raid", 00:18:45.866 "uuid": "539f03f2-692c-4669-8c48-b3d5a03e6160", 00:18:45.866 "strip_size_kb": 64, 00:18:45.866 "state": "configuring", 00:18:45.866 "raid_level": "raid0", 00:18:45.866 "superblock": true, 00:18:45.866 "num_base_bdevs": 3, 00:18:45.866 "num_base_bdevs_discovered": 0, 00:18:45.866 "num_base_bdevs_operational": 3, 00:18:45.866 "base_bdevs_list": [ 00:18:45.866 { 00:18:45.866 "name": "BaseBdev1", 00:18:45.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.866 "is_configured": false, 00:18:45.866 "data_offset": 0, 00:18:45.866 "data_size": 0 00:18:45.866 }, 00:18:45.866 { 00:18:45.866 "name": "BaseBdev2", 00:18:45.866 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:45.866 "is_configured": false, 00:18:45.866 "data_offset": 0, 00:18:45.866 "data_size": 0 00:18:45.866 }, 00:18:45.866 { 00:18:45.866 "name": "BaseBdev3", 00:18:45.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.866 "is_configured": false, 00:18:45.866 "data_offset": 0, 00:18:45.866 "data_size": 0 00:18:45.866 } 00:18:45.866 ] 00:18:45.866 }' 00:18:45.866 12:02:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.866 12:02:51 -- common/autotest_common.sh@10 -- # set +x 00:18:46.432 12:02:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:46.690 [2024-11-29 12:02:52.032424] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:46.690 [2024-11-29 12:02:52.032741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:46.690 12:02:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:46.949 [2024-11-29 12:02:52.272566] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.949 [2024-11-29 12:02:52.272864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.949 [2024-11-29 12:02:52.273018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.949 [2024-11-29 12:02:52.273091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.949 [2024-11-29 12:02:52.273204] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:46.949 [2024-11-29 12:02:52.273273] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:46.949 12:02:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.208 [2024-11-29 12:02:52.536181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.208 BaseBdev1 00:18:47.208 12:02:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:47.208 12:02:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:47.208 12:02:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:47.208 12:02:52 -- common/autotest_common.sh@899 -- # local i 00:18:47.208 12:02:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:47.208 12:02:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:47.208 12:02:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.466 12:02:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:47.724 [ 00:18:47.724 { 00:18:47.724 "name": "BaseBdev1", 00:18:47.724 "aliases": [ 00:18:47.724 "41bf6393-a9d4-466e-bed0-efa4750b73ba" 00:18:47.724 ], 00:18:47.724 "product_name": "Malloc disk", 00:18:47.724 "block_size": 512, 00:18:47.724 "num_blocks": 65536, 00:18:47.724 "uuid": "41bf6393-a9d4-466e-bed0-efa4750b73ba", 00:18:47.724 "assigned_rate_limits": { 00:18:47.724 "rw_ios_per_sec": 0, 00:18:47.724 "rw_mbytes_per_sec": 0, 00:18:47.724 "r_mbytes_per_sec": 0, 00:18:47.724 
"w_mbytes_per_sec": 0 00:18:47.724 }, 00:18:47.724 "claimed": true, 00:18:47.724 "claim_type": "exclusive_write", 00:18:47.724 "zoned": false, 00:18:47.724 "supported_io_types": { 00:18:47.724 "read": true, 00:18:47.724 "write": true, 00:18:47.724 "unmap": true, 00:18:47.724 "write_zeroes": true, 00:18:47.724 "flush": true, 00:18:47.724 "reset": true, 00:18:47.724 "compare": false, 00:18:47.724 "compare_and_write": false, 00:18:47.724 "abort": true, 00:18:47.724 "nvme_admin": false, 00:18:47.724 "nvme_io": false 00:18:47.724 }, 00:18:47.724 "memory_domains": [ 00:18:47.724 { 00:18:47.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.724 "dma_device_type": 2 00:18:47.724 } 00:18:47.724 ], 00:18:47.724 "driver_specific": {} 00:18:47.724 } 00:18:47.724 ] 00:18:47.724 12:02:53 -- common/autotest_common.sh@905 -- # return 0 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.724 12:02:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.995 12:02:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.995 "name": "Existed_Raid", 00:18:47.995 "uuid": "9dedf189-b150-470d-9808-665276ba5f43", 00:18:47.995 "strip_size_kb": 64, 00:18:47.995 "state": "configuring", 00:18:47.995 "raid_level": "raid0", 00:18:47.995 "superblock": true, 00:18:47.995 "num_base_bdevs": 3, 00:18:47.995 "num_base_bdevs_discovered": 1, 00:18:47.995 "num_base_bdevs_operational": 3, 00:18:47.995 "base_bdevs_list": [ 00:18:47.995 { 00:18:47.995 "name": "BaseBdev1", 00:18:47.995 "uuid": "41bf6393-a9d4-466e-bed0-efa4750b73ba", 00:18:47.995 "is_configured": true, 00:18:47.995 "data_offset": 2048, 00:18:47.995 "data_size": 63488 00:18:47.995 }, 00:18:47.995 { 00:18:47.995 "name": "BaseBdev2", 00:18:47.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.995 "is_configured": false, 00:18:47.995 "data_offset": 0, 00:18:47.995 "data_size": 0 00:18:47.995 }, 00:18:47.995 { 00:18:47.995 "name": "BaseBdev3", 00:18:47.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.995 "is_configured": false, 00:18:47.995 "data_offset": 0, 00:18:47.995 "data_size": 0 00:18:47.995 } 00:18:47.995 ] 00:18:47.995 }' 00:18:47.995 12:02:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.995 12:02:53 -- common/autotest_common.sh@10 -- # set +x 00:18:48.600 12:02:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:48.859 [2024-11-29 12:02:54.196608] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:48.859 [2024-11-29 12:02:54.197076] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:48.859 12:02:54 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:48.859 12:02:54 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.117 12:02:54 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:49.376 BaseBdev1 00:18:49.376 12:02:54 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:49.376 12:02:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:49.376 12:02:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:49.376 12:02:54 -- common/autotest_common.sh@899 -- # local i 00:18:49.376 12:02:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:49.376 12:02:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:49.376 12:02:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.634 12:02:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:49.891 [ 00:18:49.891 { 00:18:49.891 "name": "BaseBdev1", 00:18:49.891 "aliases": [ 00:18:49.891 "c96ae478-c973-4091-9f68-d4f01d2fc60b" 00:18:49.891 ], 00:18:49.891 "product_name": "Malloc disk", 00:18:49.891 "block_size": 512, 00:18:49.891 "num_blocks": 65536, 00:18:49.891 "uuid": "c96ae478-c973-4091-9f68-d4f01d2fc60b", 00:18:49.891 "assigned_rate_limits": { 00:18:49.891 "rw_ios_per_sec": 0, 00:18:49.891 "rw_mbytes_per_sec": 0, 00:18:49.891 "r_mbytes_per_sec": 0, 00:18:49.891 "w_mbytes_per_sec": 0 00:18:49.891 }, 00:18:49.891 "claimed": false, 00:18:49.891 "zoned": false, 00:18:49.891 "supported_io_types": { 00:18:49.891 "read": true, 00:18:49.891 "write": true, 00:18:49.891 "unmap": true, 00:18:49.891 "write_zeroes": true, 00:18:49.891 "flush": true, 00:18:49.891 "reset": true, 00:18:49.891 "compare": false, 00:18:49.891 "compare_and_write": false, 00:18:49.891 "abort": true, 00:18:49.891 "nvme_admin": false, 00:18:49.891 "nvme_io": false 00:18:49.891 }, 00:18:49.891 "memory_domains": [ 00:18:49.891 { 00:18:49.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.891 "dma_device_type": 2 00:18:49.891 } 00:18:49.891 ], 00:18:49.891 "driver_specific": {} 00:18:49.891 } 00:18:49.891 ] 00:18:49.891 12:02:55 -- common/autotest_common.sh@905 -- # return 0 00:18:49.891 12:02:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:50.149 [2024-11-29 12:02:55.504535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.149 [2024-11-29 12:02:55.507154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.149 [2024-11-29 12:02:55.507360] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.149 [2024-11-29 12:02:55.507482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.149 [2024-11-29 12:02:55.507555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.149 
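
The configuring-state checks above reduce to three RPCs that the test keeps reissuing in different orders: bdev_malloc_create for each member, bdev_raid_create for the array, and bdev_raid_get_bdevs to read the result back. A condensed sketch of that sequence, with the names, sizes and the -z 64 / -s flags taken directly from this trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 32 MiB malloc bdev with 512-byte blocks, acting as one RAID member.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1

  # raid0, 64 KiB strip size (-z 64), on-member superblock (-s); members that
  # do not exist yet leave the array in the "configuring" state seen above.
  $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # State, raid_level and discovered member count of the array.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
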
12:02:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.149 12:02:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.407 12:02:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.407 "name": "Existed_Raid", 00:18:50.407 "uuid": "0a600cd0-1846-43c0-b5bf-d39b0422aa28", 00:18:50.407 "strip_size_kb": 64, 00:18:50.407 "state": "configuring", 00:18:50.407 "raid_level": "raid0", 00:18:50.407 "superblock": true, 00:18:50.407 "num_base_bdevs": 3, 00:18:50.407 "num_base_bdevs_discovered": 1, 00:18:50.407 "num_base_bdevs_operational": 3, 00:18:50.407 "base_bdevs_list": [ 00:18:50.407 { 00:18:50.407 "name": "BaseBdev1", 00:18:50.407 "uuid": "c96ae478-c973-4091-9f68-d4f01d2fc60b", 00:18:50.407 "is_configured": true, 00:18:50.407 "data_offset": 2048, 00:18:50.407 "data_size": 63488 00:18:50.407 }, 00:18:50.407 { 00:18:50.407 "name": "BaseBdev2", 00:18:50.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.407 "is_configured": false, 00:18:50.407 "data_offset": 0, 00:18:50.407 "data_size": 0 00:18:50.407 }, 00:18:50.407 { 00:18:50.407 "name": "BaseBdev3", 00:18:50.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.407 "is_configured": false, 00:18:50.407 "data_offset": 0, 00:18:50.407 "data_size": 0 00:18:50.407 } 00:18:50.407 ] 00:18:50.407 }' 00:18:50.407 12:02:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.407 12:02:55 -- common/autotest_common.sh@10 -- # set +x 00:18:50.974 12:02:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.232 [2024-11-29 12:02:56.706526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.232 BaseBdev2 00:18:51.232 12:02:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:51.232 12:02:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:51.232 12:02:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:51.232 12:02:56 -- common/autotest_common.sh@899 -- # local i 00:18:51.232 12:02:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:51.232 12:02:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:51.232 12:02:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.492 12:02:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.751 [ 00:18:51.751 { 00:18:51.751 "name": "BaseBdev2", 00:18:51.751 "aliases": [ 00:18:51.751 
"ebdcd89a-45af-4d40-95ab-90a43627b059" 00:18:51.751 ], 00:18:51.751 "product_name": "Malloc disk", 00:18:51.751 "block_size": 512, 00:18:51.751 "num_blocks": 65536, 00:18:51.751 "uuid": "ebdcd89a-45af-4d40-95ab-90a43627b059", 00:18:51.751 "assigned_rate_limits": { 00:18:51.751 "rw_ios_per_sec": 0, 00:18:51.751 "rw_mbytes_per_sec": 0, 00:18:51.751 "r_mbytes_per_sec": 0, 00:18:51.751 "w_mbytes_per_sec": 0 00:18:51.751 }, 00:18:51.751 "claimed": true, 00:18:51.751 "claim_type": "exclusive_write", 00:18:51.751 "zoned": false, 00:18:51.751 "supported_io_types": { 00:18:51.751 "read": true, 00:18:51.751 "write": true, 00:18:51.751 "unmap": true, 00:18:51.751 "write_zeroes": true, 00:18:51.751 "flush": true, 00:18:51.751 "reset": true, 00:18:51.751 "compare": false, 00:18:51.751 "compare_and_write": false, 00:18:51.751 "abort": true, 00:18:51.751 "nvme_admin": false, 00:18:51.751 "nvme_io": false 00:18:51.751 }, 00:18:51.751 "memory_domains": [ 00:18:51.751 { 00:18:51.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.751 "dma_device_type": 2 00:18:51.751 } 00:18:51.751 ], 00:18:51.751 "driver_specific": {} 00:18:51.751 } 00:18:51.751 ] 00:18:51.751 12:02:57 -- common/autotest_common.sh@905 -- # return 0 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.751 12:02:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.009 12:02:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.009 "name": "Existed_Raid", 00:18:52.009 "uuid": "0a600cd0-1846-43c0-b5bf-d39b0422aa28", 00:18:52.009 "strip_size_kb": 64, 00:18:52.009 "state": "configuring", 00:18:52.009 "raid_level": "raid0", 00:18:52.009 "superblock": true, 00:18:52.009 "num_base_bdevs": 3, 00:18:52.009 "num_base_bdevs_discovered": 2, 00:18:52.009 "num_base_bdevs_operational": 3, 00:18:52.009 "base_bdevs_list": [ 00:18:52.009 { 00:18:52.009 "name": "BaseBdev1", 00:18:52.009 "uuid": "c96ae478-c973-4091-9f68-d4f01d2fc60b", 00:18:52.009 "is_configured": true, 00:18:52.009 "data_offset": 2048, 00:18:52.009 "data_size": 63488 00:18:52.009 }, 00:18:52.009 { 00:18:52.009 "name": "BaseBdev2", 00:18:52.009 "uuid": "ebdcd89a-45af-4d40-95ab-90a43627b059", 00:18:52.009 "is_configured": true, 00:18:52.009 "data_offset": 2048, 00:18:52.009 "data_size": 63488 00:18:52.009 }, 00:18:52.009 { 00:18:52.009 "name": "BaseBdev3", 00:18:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.009 "is_configured": false, 00:18:52.009 "data_offset": 0, 00:18:52.009 "data_size": 0 00:18:52.009 
} 00:18:52.009 ] 00:18:52.009 }' 00:18:52.009 12:02:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.009 12:02:57 -- common/autotest_common.sh@10 -- # set +x 00:18:52.944 12:02:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.944 [2024-11-29 12:02:58.380041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.944 [2024-11-29 12:02:58.380465] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:18:52.944 [2024-11-29 12:02:58.380604] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:52.944 [2024-11-29 12:02:58.380785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:52.944 [2024-11-29 12:02:58.381278] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:18:52.944 [2024-11-29 12:02:58.381412] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:18:52.944 BaseBdev3 00:18:52.944 [2024-11-29 12:02:58.381698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.944 12:02:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:52.944 12:02:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:52.944 12:02:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:52.944 12:02:58 -- common/autotest_common.sh@899 -- # local i 00:18:52.944 12:02:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:52.944 12:02:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:52.944 12:02:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.202 12:02:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:53.459 [ 00:18:53.459 { 00:18:53.459 "name": "BaseBdev3", 00:18:53.459 "aliases": [ 00:18:53.460 "98e3c833-c476-4304-a7eb-7e1e6607fd36" 00:18:53.460 ], 00:18:53.460 "product_name": "Malloc disk", 00:18:53.460 "block_size": 512, 00:18:53.460 "num_blocks": 65536, 00:18:53.460 "uuid": "98e3c833-c476-4304-a7eb-7e1e6607fd36", 00:18:53.460 "assigned_rate_limits": { 00:18:53.460 "rw_ios_per_sec": 0, 00:18:53.460 "rw_mbytes_per_sec": 0, 00:18:53.460 "r_mbytes_per_sec": 0, 00:18:53.460 "w_mbytes_per_sec": 0 00:18:53.460 }, 00:18:53.460 "claimed": true, 00:18:53.460 "claim_type": "exclusive_write", 00:18:53.460 "zoned": false, 00:18:53.460 "supported_io_types": { 00:18:53.460 "read": true, 00:18:53.460 "write": true, 00:18:53.460 "unmap": true, 00:18:53.460 "write_zeroes": true, 00:18:53.460 "flush": true, 00:18:53.460 "reset": true, 00:18:53.460 "compare": false, 00:18:53.460 "compare_and_write": false, 00:18:53.460 "abort": true, 00:18:53.460 "nvme_admin": false, 00:18:53.460 "nvme_io": false 00:18:53.460 }, 00:18:53.460 "memory_domains": [ 00:18:53.460 { 00:18:53.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.460 "dma_device_type": 2 00:18:53.460 } 00:18:53.460 ], 00:18:53.460 "driver_specific": {} 00:18:53.460 } 00:18:53.460 ] 00:18:53.460 12:02:58 -- common/autotest_common.sh@905 -- # return 0 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.460 12:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.718 12:02:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.718 "name": "Existed_Raid", 00:18:53.718 "uuid": "0a600cd0-1846-43c0-b5bf-d39b0422aa28", 00:18:53.718 "strip_size_kb": 64, 00:18:53.718 "state": "online", 00:18:53.718 "raid_level": "raid0", 00:18:53.718 "superblock": true, 00:18:53.718 "num_base_bdevs": 3, 00:18:53.718 "num_base_bdevs_discovered": 3, 00:18:53.718 "num_base_bdevs_operational": 3, 00:18:53.718 "base_bdevs_list": [ 00:18:53.718 { 00:18:53.718 "name": "BaseBdev1", 00:18:53.718 "uuid": "c96ae478-c973-4091-9f68-d4f01d2fc60b", 00:18:53.718 "is_configured": true, 00:18:53.718 "data_offset": 2048, 00:18:53.718 "data_size": 63488 00:18:53.718 }, 00:18:53.718 { 00:18:53.718 "name": "BaseBdev2", 00:18:53.718 "uuid": "ebdcd89a-45af-4d40-95ab-90a43627b059", 00:18:53.718 "is_configured": true, 00:18:53.718 "data_offset": 2048, 00:18:53.718 "data_size": 63488 00:18:53.718 }, 00:18:53.718 { 00:18:53.718 "name": "BaseBdev3", 00:18:53.718 "uuid": "98e3c833-c476-4304-a7eb-7e1e6607fd36", 00:18:53.718 "is_configured": true, 00:18:53.718 "data_offset": 2048, 00:18:53.718 "data_size": 63488 00:18:53.718 } 00:18:53.718 ] 00:18:53.718 }' 00:18:53.718 12:02:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.718 12:02:59 -- common/autotest_common.sh@10 -- # set +x 00:18:54.653 12:02:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:54.653 [2024-11-29 12:03:00.071263] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.653 [2024-11-29 12:03:00.071610] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.653 [2024-11-29 12:03:00.071801] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.653 12:03:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.911 12:03:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.911 "name": "Existed_Raid", 00:18:54.911 "uuid": "0a600cd0-1846-43c0-b5bf-d39b0422aa28", 00:18:54.911 "strip_size_kb": 64, 00:18:54.911 "state": "offline", 00:18:54.911 "raid_level": "raid0", 00:18:54.911 "superblock": true, 00:18:54.911 "num_base_bdevs": 3, 00:18:54.911 "num_base_bdevs_discovered": 2, 00:18:54.911 "num_base_bdevs_operational": 2, 00:18:54.911 "base_bdevs_list": [ 00:18:54.911 { 00:18:54.911 "name": null, 00:18:54.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.911 "is_configured": false, 00:18:54.911 "data_offset": 2048, 00:18:54.911 "data_size": 63488 00:18:54.911 }, 00:18:54.911 { 00:18:54.911 "name": "BaseBdev2", 00:18:54.911 "uuid": "ebdcd89a-45af-4d40-95ab-90a43627b059", 00:18:54.911 "is_configured": true, 00:18:54.911 "data_offset": 2048, 00:18:54.911 "data_size": 63488 00:18:54.911 }, 00:18:54.911 { 00:18:54.911 "name": "BaseBdev3", 00:18:54.911 "uuid": "98e3c833-c476-4304-a7eb-7e1e6607fd36", 00:18:54.911 "is_configured": true, 00:18:54.911 "data_offset": 2048, 00:18:54.911 "data_size": 63488 00:18:54.911 } 00:18:54.911 ] 00:18:54.911 }' 00:18:54.911 12:03:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.911 12:03:00 -- common/autotest_common.sh@10 -- # set +x 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.846 12:03:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:56.103 [2024-11-29 12:03:01.596130] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.361 12:03:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:56.361 12:03:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.361 12:03:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.361 12:03:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:56.620 12:03:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:56.620 12:03:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.620 12:03:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:56.878 [2024-11-29 12:03:02.180447] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:56.878 [2024-11-29 
12:03:02.180727] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:18:56.878 12:03:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:56.878 12:03:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.878 12:03:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:56.878 12:03:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.134 12:03:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:57.135 12:03:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:57.135 12:03:02 -- bdev/bdev_raid.sh@287 -- # killprocess 126545 00:18:57.135 12:03:02 -- common/autotest_common.sh@936 -- # '[' -z 126545 ']' 00:18:57.135 12:03:02 -- common/autotest_common.sh@940 -- # kill -0 126545 00:18:57.135 12:03:02 -- common/autotest_common.sh@941 -- # uname 00:18:57.135 12:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.135 12:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126545 00:18:57.135 12:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:57.135 12:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:57.135 12:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126545' 00:18:57.135 killing process with pid 126545 00:18:57.135 12:03:02 -- common/autotest_common.sh@955 -- # kill 126545 00:18:57.135 [2024-11-29 12:03:02.529815] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.135 12:03:02 -- common/autotest_common.sh@960 -- # wait 126545 00:18:57.135 [2024-11-29 12:03:02.530079] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:57.393 00:18:57.393 real 0m13.288s 00:18:57.393 user 0m24.334s 00:18:57.393 sys 0m1.726s 00:18:57.393 12:03:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:57.393 12:03:02 -- common/autotest_common.sh@10 -- # set +x 00:18:57.393 ************************************ 00:18:57.393 END TEST raid_state_function_test_sb 00:18:57.393 ************************************ 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:57.393 12:03:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:57.393 12:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.393 12:03:02 -- common/autotest_common.sh@10 -- # set +x 00:18:57.393 ************************************ 00:18:57.393 START TEST raid_superblock_test 00:18:57.393 ************************************ 00:18:57.393 12:03:02 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:57.393 12:03:02 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=126938 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126938 /var/tmp/spdk-raid.sock 00:18:57.393 12:03:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:57.393 12:03:02 -- common/autotest_common.sh@829 -- # '[' -z 126938 ']' 00:18:57.393 12:03:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:57.393 12:03:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.393 12:03:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:57.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:57.393 12:03:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.393 12:03:02 -- common/autotest_common.sh@10 -- # set +x 00:18:57.393 [2024-11-29 12:03:02.897846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:57.393 [2024-11-29 12:03:02.898854] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126938 ] 00:18:57.652 [2024-11-29 12:03:03.053738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.652 [2024-11-29 12:03:03.147907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.911 [2024-11-29 12:03:03.205053] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.478 12:03:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.478 12:03:03 -- common/autotest_common.sh@862 -- # return 0 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.478 12:03:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:58.737 malloc1 00:18:58.737 12:03:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.995 [2024-11-29 12:03:04.378966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.995 [2024-11-29 12:03:04.379421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.995 
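
The raid_superblock_test that has just started does not hand the malloc bdevs to the RAID module directly; each member is first wrapped in a passthru bdev with a fixed UUID, which is what the vbdev_passthru notices around this point correspond to. A sketch of one member's preparation, with the name and UUID taken from this trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Backing device for the first member.
  $RPC bdev_malloc_create 32 512 -b malloc1

  # Thin passthru bdev on top of it, with a fixed UUID supplied via -u.
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
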
[2024-11-29 12:03:04.379517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:58.995 [2024-11-29 12:03:04.379789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.995 [2024-11-29 12:03:04.382679] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.995 [2024-11-29 12:03:04.382877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.995 pt1 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.995 12:03:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:59.253 malloc2 00:18:59.253 12:03:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:59.512 [2024-11-29 12:03:04.842849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:59.512 [2024-11-29 12:03:04.843242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.512 [2024-11-29 12:03:04.843332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:59.512 [2024-11-29 12:03:04.843754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.512 [2024-11-29 12:03:04.846772] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.512 [2024-11-29 12:03:04.847087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:59.512 pt2 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:59.512 12:03:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:59.770 malloc3 00:18:59.770 12:03:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:00.028 [2024-11-29 12:03:05.375391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:00.028 [2024-11-29 12:03:05.375764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.028 
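
malloc2/pt2 and malloc3/pt3 are prepared the same way around this point, and the three passthru bdevs are then assembled into raid_bdev1 with the superblock flag, which appears to be what later lets the module re-discover the members ("raid superblock found on bdev pt1"). The remaining calls, condensed from the lines that follow in the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Third member, same malloc + passthru pattern as pt1 and pt2.
  $RPC bdev_malloc_create 32 512 -b malloc3
  $RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

  # raid0 across the three passthru members, 64 KiB strips, superblock enabled (-s).
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
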
[2024-11-29 12:03:05.375856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:00.028 [2024-11-29 12:03:05.376131] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.028 [2024-11-29 12:03:05.378797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.028 [2024-11-29 12:03:05.379020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:00.028 pt3 00:19:00.028 12:03:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:00.028 12:03:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:00.028 12:03:05 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:00.287 [2024-11-29 12:03:05.603613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:00.287 [2024-11-29 12:03:05.606232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.287 [2024-11-29 12:03:05.606485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:00.287 [2024-11-29 12:03:05.606781] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:00.287 [2024-11-29 12:03:05.606905] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:00.287 [2024-11-29 12:03:05.607214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:00.287 [2024-11-29 12:03:05.607793] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:00.287 [2024-11-29 12:03:05.607930] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:19:00.287 [2024-11-29 12:03:05.608257] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.287 12:03:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.547 12:03:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.547 "name": "raid_bdev1", 00:19:00.547 "uuid": "78082417-cab5-4fc0-98b5-9f1115bb8d98", 00:19:00.547 "strip_size_kb": 64, 00:19:00.547 "state": "online", 00:19:00.547 "raid_level": "raid0", 00:19:00.547 "superblock": true, 00:19:00.547 "num_base_bdevs": 3, 00:19:00.547 "num_base_bdevs_discovered": 3, 00:19:00.547 "num_base_bdevs_operational": 3, 00:19:00.547 "base_bdevs_list": [ 00:19:00.547 { 00:19:00.547 "name": "pt1", 00:19:00.547 "uuid": 
"84e8216d-a8b9-5e5b-b300-445ba1a56a16", 00:19:00.547 "is_configured": true, 00:19:00.547 "data_offset": 2048, 00:19:00.547 "data_size": 63488 00:19:00.547 }, 00:19:00.547 { 00:19:00.547 "name": "pt2", 00:19:00.547 "uuid": "6d4d7376-c46b-57bc-839b-3758b8b74103", 00:19:00.547 "is_configured": true, 00:19:00.547 "data_offset": 2048, 00:19:00.547 "data_size": 63488 00:19:00.547 }, 00:19:00.547 { 00:19:00.547 "name": "pt3", 00:19:00.547 "uuid": "449ccb1b-e255-5302-8d17-4e0f6b3a767f", 00:19:00.547 "is_configured": true, 00:19:00.547 "data_offset": 2048, 00:19:00.547 "data_size": 63488 00:19:00.547 } 00:19:00.547 ] 00:19:00.547 }' 00:19:00.547 12:03:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.547 12:03:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.119 12:03:06 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:01.119 12:03:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:01.393 [2024-11-29 12:03:06.692723] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.393 12:03:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=78082417-cab5-4fc0-98b5-9f1115bb8d98 00:19:01.393 12:03:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 78082417-cab5-4fc0-98b5-9f1115bb8d98 ']' 00:19:01.393 12:03:06 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:01.652 [2024-11-29 12:03:06.964500] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.652 [2024-11-29 12:03:06.964840] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.652 [2024-11-29 12:03:06.965089] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.652 [2024-11-29 12:03:06.965299] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.652 [2024-11-29 12:03:06.965422] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:19:01.653 12:03:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.653 12:03:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:01.911 12:03:07 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:01.911 12:03:07 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:01.911 12:03:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.911 12:03:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:02.170 12:03:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:02.170 12:03:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:02.170 12:03:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:02.170 12:03:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:02.427 12:03:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:02.427 12:03:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:02.685 12:03:08 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:02.685 12:03:08 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:02.685 12:03:08 -- common/autotest_common.sh@650 -- # local es=0 00:19:02.685 12:03:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:02.685 12:03:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.685 12:03:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.685 12:03:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.685 12:03:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.685 12:03:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.944 12:03:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.944 12:03:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.944 12:03:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:02.944 12:03:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:02.944 [2024-11-29 12:03:08.408865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:02.944 [2024-11-29 12:03:08.411522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:02.944 [2024-11-29 12:03:08.411732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:02.944 [2024-11-29 12:03:08.411840] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:02.944 [2024-11-29 12:03:08.412201] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:02.944 [2024-11-29 12:03:08.412383] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:02.944 [2024-11-29 12:03:08.412479] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.944 [2024-11-29 12:03:08.412600] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:19:02.944 request: 00:19:02.944 { 00:19:02.944 "name": "raid_bdev1", 00:19:02.944 "raid_level": "raid0", 00:19:02.944 "base_bdevs": [ 00:19:02.944 "malloc1", 00:19:02.944 "malloc2", 00:19:02.944 "malloc3" 00:19:02.944 ], 00:19:02.944 "superblock": false, 00:19:02.944 "strip_size_kb": 64, 00:19:02.944 "method": "bdev_raid_create", 00:19:02.944 "req_id": 1 00:19:02.944 } 00:19:02.944 Got JSON-RPC error response 00:19:02.944 response: 00:19:02.944 { 00:19:02.944 "code": -17, 00:19:02.944 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:02.944 } 00:19:02.944 12:03:08 -- common/autotest_common.sh@653 -- # es=1 00:19:02.944 12:03:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:02.944 12:03:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:02.944 12:03:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:02.944 12:03:08 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.944 12:03:08 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:03.510 12:03:08 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:03.510 12:03:08 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:03.510 12:03:08 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:03.510 [2024-11-29 12:03:08.989226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:03.510 [2024-11-29 12:03:08.989708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.510 [2024-11-29 12:03:08.989799] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:03.510 [2024-11-29 12:03:08.989991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.510 [2024-11-29 12:03:08.992709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.510 [2024-11-29 12:03:08.992918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:03.510 [2024-11-29 12:03:08.993177] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:03.510 [2024-11-29 12:03:08.993374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.510 pt1 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.510 12:03:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.077 12:03:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.077 "name": "raid_bdev1", 00:19:04.077 "uuid": "78082417-cab5-4fc0-98b5-9f1115bb8d98", 00:19:04.077 "strip_size_kb": 64, 00:19:04.077 "state": "configuring", 00:19:04.077 "raid_level": "raid0", 00:19:04.077 "superblock": true, 00:19:04.077 "num_base_bdevs": 3, 00:19:04.077 "num_base_bdevs_discovered": 1, 00:19:04.077 "num_base_bdevs_operational": 3, 00:19:04.077 "base_bdevs_list": [ 00:19:04.077 { 00:19:04.077 "name": "pt1", 00:19:04.077 "uuid": "84e8216d-a8b9-5e5b-b300-445ba1a56a16", 00:19:04.077 "is_configured": true, 00:19:04.077 "data_offset": 2048, 00:19:04.077 "data_size": 63488 00:19:04.077 }, 00:19:04.077 { 00:19:04.077 "name": null, 00:19:04.077 "uuid": "6d4d7376-c46b-57bc-839b-3758b8b74103", 00:19:04.077 "is_configured": false, 00:19:04.077 "data_offset": 2048, 00:19:04.077 "data_size": 63488 00:19:04.077 }, 00:19:04.077 { 00:19:04.077 "name": null, 00:19:04.077 "uuid": "449ccb1b-e255-5302-8d17-4e0f6b3a767f", 00:19:04.077 "is_configured": false, 00:19:04.077 
"data_offset": 2048, 00:19:04.077 "data_size": 63488 00:19:04.077 } 00:19:04.077 ] 00:19:04.077 }' 00:19:04.077 12:03:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.077 12:03:09 -- common/autotest_common.sh@10 -- # set +x 00:19:04.644 12:03:09 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:19:04.644 12:03:09 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.903 [2024-11-29 12:03:10.193600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.903 [2024-11-29 12:03:10.194102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.903 [2024-11-29 12:03:10.194203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:04.903 [2024-11-29 12:03:10.194463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.903 [2024-11-29 12:03:10.194990] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.903 [2024-11-29 12:03:10.195157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.903 [2024-11-29 12:03:10.195388] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:04.903 [2024-11-29 12:03:10.195533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.903 pt2 00:19:04.903 12:03:10 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:05.162 [2024-11-29 12:03:10.449705] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.162 12:03:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.421 12:03:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.421 "name": "raid_bdev1", 00:19:05.421 "uuid": "78082417-cab5-4fc0-98b5-9f1115bb8d98", 00:19:05.421 "strip_size_kb": 64, 00:19:05.421 "state": "configuring", 00:19:05.421 "raid_level": "raid0", 00:19:05.421 "superblock": true, 00:19:05.421 "num_base_bdevs": 3, 00:19:05.421 "num_base_bdevs_discovered": 1, 00:19:05.421 "num_base_bdevs_operational": 3, 00:19:05.421 "base_bdevs_list": [ 00:19:05.421 { 00:19:05.421 "name": "pt1", 00:19:05.421 "uuid": "84e8216d-a8b9-5e5b-b300-445ba1a56a16", 00:19:05.421 "is_configured": true, 00:19:05.421 "data_offset": 2048, 00:19:05.421 "data_size": 63488 00:19:05.421 }, 00:19:05.421 { 00:19:05.421 "name": null, 00:19:05.421 "uuid": 
"6d4d7376-c46b-57bc-839b-3758b8b74103", 00:19:05.421 "is_configured": false, 00:19:05.421 "data_offset": 2048, 00:19:05.421 "data_size": 63488 00:19:05.421 }, 00:19:05.421 { 00:19:05.421 "name": null, 00:19:05.421 "uuid": "449ccb1b-e255-5302-8d17-4e0f6b3a767f", 00:19:05.421 "is_configured": false, 00:19:05.421 "data_offset": 2048, 00:19:05.421 "data_size": 63488 00:19:05.421 } 00:19:05.421 ] 00:19:05.421 }' 00:19:05.421 12:03:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.421 12:03:10 -- common/autotest_common.sh@10 -- # set +x 00:19:05.987 12:03:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:05.987 12:03:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:05.987 12:03:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.245 [2024-11-29 12:03:11.535054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.245 [2024-11-29 12:03:11.535491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.245 [2024-11-29 12:03:11.535585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:06.245 [2024-11-29 12:03:11.535769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.245 [2024-11-29 12:03:11.536395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.245 [2024-11-29 12:03:11.536584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.245 [2024-11-29 12:03:11.536865] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:06.245 [2024-11-29 12:03:11.537030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.245 pt2 00:19:06.245 12:03:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:06.245 12:03:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:06.245 12:03:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:06.503 [2024-11-29 12:03:11.759134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:06.503 [2024-11-29 12:03:11.759530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.503 [2024-11-29 12:03:11.759621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:06.503 [2024-11-29 12:03:11.759865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.503 [2024-11-29 12:03:11.760487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.503 [2024-11-29 12:03:11.760660] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:06.503 [2024-11-29 12:03:11.760949] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:06.503 [2024-11-29 12:03:11.761108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:06.503 [2024-11-29 12:03:11.761317] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:06.503 [2024-11-29 12:03:11.761428] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:06.503 [2024-11-29 12:03:11.761628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:19:06.503 [2024-11-29 12:03:11.762117] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:06.503 [2024-11-29 12:03:11.762235] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:06.503 [2024-11-29 12:03:11.762499] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.503 pt3 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.503 12:03:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.503 12:03:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.503 "name": "raid_bdev1", 00:19:06.503 "uuid": "78082417-cab5-4fc0-98b5-9f1115bb8d98", 00:19:06.503 "strip_size_kb": 64, 00:19:06.503 "state": "online", 00:19:06.503 "raid_level": "raid0", 00:19:06.503 "superblock": true, 00:19:06.503 "num_base_bdevs": 3, 00:19:06.503 "num_base_bdevs_discovered": 3, 00:19:06.503 "num_base_bdevs_operational": 3, 00:19:06.503 "base_bdevs_list": [ 00:19:06.503 { 00:19:06.503 "name": "pt1", 00:19:06.503 "uuid": "84e8216d-a8b9-5e5b-b300-445ba1a56a16", 00:19:06.503 "is_configured": true, 00:19:06.503 "data_offset": 2048, 00:19:06.503 "data_size": 63488 00:19:06.503 }, 00:19:06.503 { 00:19:06.503 "name": "pt2", 00:19:06.503 "uuid": "6d4d7376-c46b-57bc-839b-3758b8b74103", 00:19:06.503 "is_configured": true, 00:19:06.503 "data_offset": 2048, 00:19:06.503 "data_size": 63488 00:19:06.503 }, 00:19:06.503 { 00:19:06.503 "name": "pt3", 00:19:06.503 "uuid": "449ccb1b-e255-5302-8d17-4e0f6b3a767f", 00:19:06.503 "is_configured": true, 00:19:06.503 "data_offset": 2048, 00:19:06.503 "data_size": 63488 00:19:06.503 } 00:19:06.503 ] 00:19:06.503 }' 00:19:06.503 12:03:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.503 12:03:12 -- common/autotest_common.sh@10 -- # set +x 00:19:07.440 12:03:12 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:07.440 12:03:12 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:07.440 [2024-11-29 12:03:12.883651] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.440 12:03:12 -- bdev/bdev_raid.sh@430 -- # '[' 78082417-cab5-4fc0-98b5-9f1115bb8d98 '!=' 78082417-cab5-4fc0-98b5-9f1115bb8d98 ']' 00:19:07.440 12:03:12 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:19:07.440 12:03:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:07.440 
12:03:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:07.440 12:03:12 -- bdev/bdev_raid.sh@511 -- # killprocess 126938 00:19:07.440 12:03:12 -- common/autotest_common.sh@936 -- # '[' -z 126938 ']' 00:19:07.440 12:03:12 -- common/autotest_common.sh@940 -- # kill -0 126938 00:19:07.440 12:03:12 -- common/autotest_common.sh@941 -- # uname 00:19:07.440 12:03:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.440 12:03:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126938 00:19:07.440 12:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:07.440 12:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:07.440 12:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126938' 00:19:07.440 killing process with pid 126938 00:19:07.440 12:03:12 -- common/autotest_common.sh@955 -- # kill 126938 00:19:07.440 12:03:12 -- common/autotest_common.sh@960 -- # wait 126938 00:19:07.440 [2024-11-29 12:03:12.931735] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.440 [2024-11-29 12:03:12.931847] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.440 [2024-11-29 12:03:12.931916] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.440 [2024-11-29 12:03:12.932165] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:07.700 [2024-11-29 12:03:12.970807] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:07.958 00:19:07.958 real 0m10.453s 00:19:07.958 user 0m18.985s 00:19:07.958 sys 0m1.329s 00:19:07.958 12:03:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:07.958 12:03:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.958 ************************************ 00:19:07.958 END TEST raid_superblock_test 00:19:07.958 ************************************ 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:19:07.958 12:03:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:07.958 12:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.958 12:03:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.958 ************************************ 00:19:07.958 START TEST raid_state_function_test 00:19:07.958 ************************************ 00:19:07.958 12:03:13 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:07.958 12:03:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=127250 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127250' 00:19:07.959 Process raid pid: 127250 00:19:07.959 12:03:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127250 /var/tmp/spdk-raid.sock 00:19:07.959 12:03:13 -- common/autotest_common.sh@829 -- # '[' -z 127250 ']' 00:19:07.959 12:03:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:07.959 12:03:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.959 12:03:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:07.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:07.959 12:03:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.959 12:03:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.959 [2024-11-29 12:03:13.407696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:07.959 [2024-11-29 12:03:13.408174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.217 [2024-11-29 12:03:13.557187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.217 [2024-11-29 12:03:13.643802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.217 [2024-11-29 12:03:13.700677] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.152 12:03:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.152 12:03:14 -- common/autotest_common.sh@862 -- # return 0 00:19:09.152 12:03:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:09.152 [2024-11-29 12:03:14.650033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.152 [2024-11-29 12:03:14.650444] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.152 [2024-11-29 12:03:14.650575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.152 [2024-11-29 12:03:14.650645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.152 [2024-11-29 12:03:14.650860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:09.152 [2024-11-29 12:03:14.650955] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.410 12:03:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:09.410 12:03:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.411 12:03:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.669 12:03:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.669 "name": "Existed_Raid", 00:19:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.669 "strip_size_kb": 64, 00:19:09.669 "state": "configuring", 00:19:09.669 "raid_level": "concat", 00:19:09.669 "superblock": false, 00:19:09.669 "num_base_bdevs": 3, 00:19:09.669 "num_base_bdevs_discovered": 0, 00:19:09.669 "num_base_bdevs_operational": 3, 00:19:09.669 "base_bdevs_list": [ 00:19:09.669 { 00:19:09.669 "name": "BaseBdev1", 00:19:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.669 "is_configured": false, 00:19:09.669 "data_offset": 0, 00:19:09.669 "data_size": 0 00:19:09.669 }, 00:19:09.669 { 00:19:09.669 "name": "BaseBdev2", 00:19:09.669 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:09.669 "is_configured": false, 00:19:09.669 "data_offset": 0, 00:19:09.669 "data_size": 0 00:19:09.669 }, 00:19:09.669 { 00:19:09.669 "name": "BaseBdev3", 00:19:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.669 "is_configured": false, 00:19:09.669 "data_offset": 0, 00:19:09.669 "data_size": 0 00:19:09.669 } 00:19:09.669 ] 00:19:09.669 }' 00:19:09.669 12:03:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.669 12:03:14 -- common/autotest_common.sh@10 -- # set +x 00:19:10.234 12:03:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:10.492 [2024-11-29 12:03:15.818126] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:10.492 [2024-11-29 12:03:15.818451] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:10.492 12:03:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:10.750 [2024-11-29 12:03:16.046247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.750 [2024-11-29 12:03:16.046593] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.750 [2024-11-29 12:03:16.046723] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.750 [2024-11-29 12:03:16.046871] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.750 [2024-11-29 12:03:16.046981] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.750 [2024-11-29 12:03:16.047119] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.750 12:03:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:11.009 [2024-11-29 12:03:16.322062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.009 BaseBdev1 00:19:11.009 12:03:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:11.009 12:03:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:11.009 12:03:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:11.009 12:03:16 -- common/autotest_common.sh@899 -- # local i 00:19:11.009 12:03:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:11.009 12:03:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:11.009 12:03:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.267 12:03:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:11.526 [ 00:19:11.526 { 00:19:11.526 "name": "BaseBdev1", 00:19:11.526 "aliases": [ 00:19:11.526 "883fbb6f-2a8a-4b51-9313-8aec50e7dea0" 00:19:11.526 ], 00:19:11.526 "product_name": "Malloc disk", 00:19:11.526 "block_size": 512, 00:19:11.526 "num_blocks": 65536, 00:19:11.526 "uuid": "883fbb6f-2a8a-4b51-9313-8aec50e7dea0", 00:19:11.526 "assigned_rate_limits": { 00:19:11.526 "rw_ios_per_sec": 0, 00:19:11.526 "rw_mbytes_per_sec": 0, 00:19:11.526 "r_mbytes_per_sec": 0, 00:19:11.526 "w_mbytes_per_sec": 
0 00:19:11.526 }, 00:19:11.526 "claimed": true, 00:19:11.526 "claim_type": "exclusive_write", 00:19:11.526 "zoned": false, 00:19:11.526 "supported_io_types": { 00:19:11.526 "read": true, 00:19:11.526 "write": true, 00:19:11.526 "unmap": true, 00:19:11.526 "write_zeroes": true, 00:19:11.526 "flush": true, 00:19:11.526 "reset": true, 00:19:11.526 "compare": false, 00:19:11.526 "compare_and_write": false, 00:19:11.526 "abort": true, 00:19:11.526 "nvme_admin": false, 00:19:11.526 "nvme_io": false 00:19:11.526 }, 00:19:11.526 "memory_domains": [ 00:19:11.526 { 00:19:11.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.526 "dma_device_type": 2 00:19:11.526 } 00:19:11.526 ], 00:19:11.526 "driver_specific": {} 00:19:11.526 } 00:19:11.526 ] 00:19:11.526 12:03:16 -- common/autotest_common.sh@905 -- # return 0 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.526 12:03:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.784 12:03:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.784 "name": "Existed_Raid", 00:19:11.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.784 "strip_size_kb": 64, 00:19:11.784 "state": "configuring", 00:19:11.784 "raid_level": "concat", 00:19:11.784 "superblock": false, 00:19:11.784 "num_base_bdevs": 3, 00:19:11.784 "num_base_bdevs_discovered": 1, 00:19:11.784 "num_base_bdevs_operational": 3, 00:19:11.784 "base_bdevs_list": [ 00:19:11.784 { 00:19:11.784 "name": "BaseBdev1", 00:19:11.784 "uuid": "883fbb6f-2a8a-4b51-9313-8aec50e7dea0", 00:19:11.784 "is_configured": true, 00:19:11.784 "data_offset": 0, 00:19:11.784 "data_size": 65536 00:19:11.784 }, 00:19:11.784 { 00:19:11.784 "name": "BaseBdev2", 00:19:11.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.784 "is_configured": false, 00:19:11.784 "data_offset": 0, 00:19:11.784 "data_size": 0 00:19:11.784 }, 00:19:11.784 { 00:19:11.784 "name": "BaseBdev3", 00:19:11.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.784 "is_configured": false, 00:19:11.784 "data_offset": 0, 00:19:11.784 "data_size": 0 00:19:11.784 } 00:19:11.784 ] 00:19:11.784 }' 00:19:11.784 12:03:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.784 12:03:17 -- common/autotest_common.sh@10 -- # set +x 00:19:12.351 12:03:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.610 [2024-11-29 12:03:18.038641] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.610 [2024-11-29 12:03:18.038966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:19:12.610 12:03:18 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:12.610 12:03:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:12.870 [2024-11-29 12:03:18.306821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.870 [2024-11-29 12:03:18.309409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.870 [2024-11-29 12:03:18.309600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.870 [2024-11-29 12:03:18.309713] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.870 [2024-11-29 12:03:18.309787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.870 12:03:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.129 12:03:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.129 "name": "Existed_Raid", 00:19:13.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.129 "strip_size_kb": 64, 00:19:13.129 "state": "configuring", 00:19:13.129 "raid_level": "concat", 00:19:13.129 "superblock": false, 00:19:13.129 "num_base_bdevs": 3, 00:19:13.129 "num_base_bdevs_discovered": 1, 00:19:13.129 "num_base_bdevs_operational": 3, 00:19:13.129 "base_bdevs_list": [ 00:19:13.129 { 00:19:13.129 "name": "BaseBdev1", 00:19:13.129 "uuid": "883fbb6f-2a8a-4b51-9313-8aec50e7dea0", 00:19:13.129 "is_configured": true, 00:19:13.129 "data_offset": 0, 00:19:13.129 "data_size": 65536 00:19:13.129 }, 00:19:13.129 { 00:19:13.129 "name": "BaseBdev2", 00:19:13.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.129 "is_configured": false, 00:19:13.129 "data_offset": 0, 00:19:13.129 "data_size": 0 00:19:13.129 }, 00:19:13.129 { 00:19:13.129 "name": "BaseBdev3", 00:19:13.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.129 "is_configured": false, 00:19:13.129 "data_offset": 0, 00:19:13.129 "data_size": 0 00:19:13.129 } 00:19:13.129 ] 00:19:13.129 }' 00:19:13.129 12:03:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.129 12:03:18 -- common/autotest_common.sh@10 -- # set +x 00:19:14.161 12:03:19 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:14.161 [2024-11-29 12:03:19.506850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.161 BaseBdev2 00:19:14.161 12:03:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:14.161 12:03:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:14.161 12:03:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.161 12:03:19 -- common/autotest_common.sh@899 -- # local i 00:19:14.161 12:03:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.161 12:03:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.161 12:03:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.420 12:03:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.678 [ 00:19:14.678 { 00:19:14.678 "name": "BaseBdev2", 00:19:14.678 "aliases": [ 00:19:14.678 "42955e02-59d6-4d6f-811a-e95118b191e0" 00:19:14.678 ], 00:19:14.678 "product_name": "Malloc disk", 00:19:14.678 "block_size": 512, 00:19:14.678 "num_blocks": 65536, 00:19:14.678 "uuid": "42955e02-59d6-4d6f-811a-e95118b191e0", 00:19:14.678 "assigned_rate_limits": { 00:19:14.678 "rw_ios_per_sec": 0, 00:19:14.678 "rw_mbytes_per_sec": 0, 00:19:14.678 "r_mbytes_per_sec": 0, 00:19:14.678 "w_mbytes_per_sec": 0 00:19:14.678 }, 00:19:14.678 "claimed": true, 00:19:14.678 "claim_type": "exclusive_write", 00:19:14.678 "zoned": false, 00:19:14.678 "supported_io_types": { 00:19:14.678 "read": true, 00:19:14.678 "write": true, 00:19:14.678 "unmap": true, 00:19:14.678 "write_zeroes": true, 00:19:14.678 "flush": true, 00:19:14.678 "reset": true, 00:19:14.678 "compare": false, 00:19:14.678 "compare_and_write": false, 00:19:14.678 "abort": true, 00:19:14.678 "nvme_admin": false, 00:19:14.678 "nvme_io": false 00:19:14.678 }, 00:19:14.678 "memory_domains": [ 00:19:14.678 { 00:19:14.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.678 "dma_device_type": 2 00:19:14.678 } 00:19:14.678 ], 00:19:14.678 "driver_specific": {} 00:19:14.678 } 00:19:14.678 ] 00:19:14.678 12:03:20 -- common/autotest_common.sh@905 -- # return 0 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.678 12:03:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:14.937 12:03:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.937 "name": "Existed_Raid", 00:19:14.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.937 "strip_size_kb": 64, 00:19:14.937 "state": "configuring", 00:19:14.937 "raid_level": "concat", 00:19:14.937 "superblock": false, 00:19:14.937 "num_base_bdevs": 3, 00:19:14.937 "num_base_bdevs_discovered": 2, 00:19:14.937 "num_base_bdevs_operational": 3, 00:19:14.937 "base_bdevs_list": [ 00:19:14.937 { 00:19:14.937 "name": "BaseBdev1", 00:19:14.937 "uuid": "883fbb6f-2a8a-4b51-9313-8aec50e7dea0", 00:19:14.937 "is_configured": true, 00:19:14.937 "data_offset": 0, 00:19:14.937 "data_size": 65536 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "name": "BaseBdev2", 00:19:14.937 "uuid": "42955e02-59d6-4d6f-811a-e95118b191e0", 00:19:14.937 "is_configured": true, 00:19:14.937 "data_offset": 0, 00:19:14.937 "data_size": 65536 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "name": "BaseBdev3", 00:19:14.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.937 "is_configured": false, 00:19:14.937 "data_offset": 0, 00:19:14.937 "data_size": 0 00:19:14.937 } 00:19:14.937 ] 00:19:14.937 }' 00:19:14.937 12:03:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.937 12:03:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.505 12:03:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:15.762 [2024-11-29 12:03:21.088185] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.762 [2024-11-29 12:03:21.088549] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:19:15.762 [2024-11-29 12:03:21.088601] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:15.762 [2024-11-29 12:03:21.088879] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:19:15.762 [2024-11-29 12:03:21.089440] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:19:15.762 [2024-11-29 12:03:21.089593] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:19:15.762 [2024-11-29 12:03:21.089992] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.762 BaseBdev3 00:19:15.762 12:03:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:15.762 12:03:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:15.762 12:03:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:15.762 12:03:21 -- common/autotest_common.sh@899 -- # local i 00:19:15.762 12:03:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:15.762 12:03:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:15.762 12:03:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.019 12:03:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:16.276 [ 00:19:16.276 { 00:19:16.276 "name": "BaseBdev3", 00:19:16.276 "aliases": [ 00:19:16.276 "8dd67c0a-d9fb-41fb-8bc7-01b050de7f55" 00:19:16.276 ], 00:19:16.276 "product_name": "Malloc disk", 00:19:16.276 "block_size": 512, 00:19:16.276 "num_blocks": 65536, 00:19:16.276 "uuid": "8dd67c0a-d9fb-41fb-8bc7-01b050de7f55", 00:19:16.276 "assigned_rate_limits": { 00:19:16.276 
"rw_ios_per_sec": 0, 00:19:16.276 "rw_mbytes_per_sec": 0, 00:19:16.276 "r_mbytes_per_sec": 0, 00:19:16.276 "w_mbytes_per_sec": 0 00:19:16.276 }, 00:19:16.276 "claimed": true, 00:19:16.276 "claim_type": "exclusive_write", 00:19:16.276 "zoned": false, 00:19:16.276 "supported_io_types": { 00:19:16.276 "read": true, 00:19:16.276 "write": true, 00:19:16.276 "unmap": true, 00:19:16.276 "write_zeroes": true, 00:19:16.276 "flush": true, 00:19:16.276 "reset": true, 00:19:16.276 "compare": false, 00:19:16.276 "compare_and_write": false, 00:19:16.276 "abort": true, 00:19:16.276 "nvme_admin": false, 00:19:16.276 "nvme_io": false 00:19:16.276 }, 00:19:16.276 "memory_domains": [ 00:19:16.276 { 00:19:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.276 "dma_device_type": 2 00:19:16.276 } 00:19:16.276 ], 00:19:16.276 "driver_specific": {} 00:19:16.276 } 00:19:16.276 ] 00:19:16.276 12:03:21 -- common/autotest_common.sh@905 -- # return 0 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.276 12:03:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.534 12:03:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.534 "name": "Existed_Raid", 00:19:16.534 "uuid": "a52fa8fb-7b33-489c-af3c-3541f393ba8a", 00:19:16.534 "strip_size_kb": 64, 00:19:16.534 "state": "online", 00:19:16.534 "raid_level": "concat", 00:19:16.534 "superblock": false, 00:19:16.534 "num_base_bdevs": 3, 00:19:16.534 "num_base_bdevs_discovered": 3, 00:19:16.534 "num_base_bdevs_operational": 3, 00:19:16.534 "base_bdevs_list": [ 00:19:16.534 { 00:19:16.534 "name": "BaseBdev1", 00:19:16.534 "uuid": "883fbb6f-2a8a-4b51-9313-8aec50e7dea0", 00:19:16.534 "is_configured": true, 00:19:16.534 "data_offset": 0, 00:19:16.534 "data_size": 65536 00:19:16.534 }, 00:19:16.534 { 00:19:16.534 "name": "BaseBdev2", 00:19:16.534 "uuid": "42955e02-59d6-4d6f-811a-e95118b191e0", 00:19:16.534 "is_configured": true, 00:19:16.535 "data_offset": 0, 00:19:16.535 "data_size": 65536 00:19:16.535 }, 00:19:16.535 { 00:19:16.535 "name": "BaseBdev3", 00:19:16.535 "uuid": "8dd67c0a-d9fb-41fb-8bc7-01b050de7f55", 00:19:16.535 "is_configured": true, 00:19:16.535 "data_offset": 0, 00:19:16.535 "data_size": 65536 00:19:16.535 } 00:19:16.535 ] 00:19:16.535 }' 00:19:16.535 12:03:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.535 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:17.102 12:03:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:19:17.360 [2024-11-29 12:03:22.844834] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.360 [2024-11-29 12:03:22.845195] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.360 [2024-11-29 12:03:22.845398] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.619 12:03:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.878 12:03:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.878 "name": "Existed_Raid", 00:19:17.878 "uuid": "a52fa8fb-7b33-489c-af3c-3541f393ba8a", 00:19:17.878 "strip_size_kb": 64, 00:19:17.878 "state": "offline", 00:19:17.878 "raid_level": "concat", 00:19:17.878 "superblock": false, 00:19:17.878 "num_base_bdevs": 3, 00:19:17.878 "num_base_bdevs_discovered": 2, 00:19:17.878 "num_base_bdevs_operational": 2, 00:19:17.878 "base_bdevs_list": [ 00:19:17.878 { 00:19:17.878 "name": null, 00:19:17.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.878 "is_configured": false, 00:19:17.878 "data_offset": 0, 00:19:17.878 "data_size": 65536 00:19:17.878 }, 00:19:17.878 { 00:19:17.878 "name": "BaseBdev2", 00:19:17.878 "uuid": "42955e02-59d6-4d6f-811a-e95118b191e0", 00:19:17.878 "is_configured": true, 00:19:17.878 "data_offset": 0, 00:19:17.878 "data_size": 65536 00:19:17.878 }, 00:19:17.878 { 00:19:17.878 "name": "BaseBdev3", 00:19:17.878 "uuid": "8dd67c0a-d9fb-41fb-8bc7-01b050de7f55", 00:19:17.878 "is_configured": true, 00:19:17.878 "data_offset": 0, 00:19:17.878 "data_size": 65536 00:19:17.878 } 00:19:17.878 ] 00:19:17.878 }' 00:19:17.878 12:03:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.878 12:03:23 -- common/autotest_common.sh@10 -- # set +x 00:19:18.445 12:03:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:18.445 12:03:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.445 12:03:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.445 12:03:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:18.704 12:03:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:18.704 12:03:24 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.704 12:03:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.963 [2024-11-29 12:03:24.290947] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.963 12:03:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:18.963 12:03:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.963 12:03:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.963 12:03:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:19.222 12:03:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:19.222 12:03:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.222 12:03:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:19.480 [2024-11-29 12:03:24.831083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:19.480 [2024-11-29 12:03:24.831322] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:19:19.480 12:03:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:19.480 12:03:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:19.480 12:03:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.480 12:03:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:19.740 12:03:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:19.740 12:03:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:19.740 12:03:25 -- bdev/bdev_raid.sh@287 -- # killprocess 127250 00:19:19.740 12:03:25 -- common/autotest_common.sh@936 -- # '[' -z 127250 ']' 00:19:19.740 12:03:25 -- common/autotest_common.sh@940 -- # kill -0 127250 00:19:19.740 12:03:25 -- common/autotest_common.sh@941 -- # uname 00:19:19.740 12:03:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:19.740 12:03:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127250 00:19:19.740 12:03:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:19.740 12:03:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:19.740 12:03:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127250' 00:19:19.740 killing process with pid 127250 00:19:19.740 12:03:25 -- common/autotest_common.sh@955 -- # kill 127250 00:19:19.740 12:03:25 -- common/autotest_common.sh@960 -- # wait 127250 00:19:19.740 [2024-11-29 12:03:25.206081] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:19.740 [2024-11-29 12:03:25.206175] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.999 12:03:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:19.999 00:19:19.999 real 0m12.106s 00:19:19.999 user 0m22.325s 00:19:19.999 sys 0m1.471s 00:19:19.999 12:03:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:19.999 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:19:19.999 ************************************ 00:19:19.999 END TEST raid_state_function_test 00:19:19.999 ************************************ 00:19:19.999 12:03:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:19.999 12:03:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 
00:19:19.999 12:03:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.999 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:19:20.299 ************************************ 00:19:20.299 START TEST raid_state_function_test_sb 00:19:20.299 ************************************ 00:19:20.299 12:03:25 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:20.299 12:03:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=127625 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:20.300 Process raid pid: 127625 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127625' 00:19:20.300 12:03:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127625 /var/tmp/spdk-raid.sock 00:19:20.300 12:03:25 -- common/autotest_common.sh@829 -- # '[' -z 127625 ']' 00:19:20.300 12:03:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:20.300 12:03:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.300 12:03:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:20.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:20.300 12:03:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.300 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:19:20.300 [2024-11-29 12:03:25.574208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:20.300 [2024-11-29 12:03:25.574743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.300 [2024-11-29 12:03:25.720539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.580 [2024-11-29 12:03:25.817866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.580 [2024-11-29 12:03:25.872375] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:21.147 12:03:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.147 12:03:26 -- common/autotest_common.sh@862 -- # return 0 00:19:21.147 12:03:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:21.406 [2024-11-29 12:03:26.872752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:21.406 [2024-11-29 12:03:26.873083] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:21.406 [2024-11-29 12:03:26.873234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:21.406 [2024-11-29 12:03:26.873380] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:21.406 [2024-11-29 12:03:26.873489] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:21.406 [2024-11-29 12:03:26.873585] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.406 12:03:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.972 12:03:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.972 "name": "Existed_Raid", 00:19:21.972 "uuid": "daa459f4-6d47-49ed-a3b3-ce0205094ed1", 00:19:21.972 "strip_size_kb": 64, 00:19:21.972 "state": "configuring", 00:19:21.972 "raid_level": "concat", 00:19:21.972 "superblock": true, 00:19:21.972 "num_base_bdevs": 3, 00:19:21.972 "num_base_bdevs_discovered": 0, 00:19:21.972 "num_base_bdevs_operational": 3, 00:19:21.972 "base_bdevs_list": [ 00:19:21.972 { 00:19:21.972 "name": "BaseBdev1", 00:19:21.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.972 "is_configured": false, 00:19:21.972 "data_offset": 0, 00:19:21.972 "data_size": 0 00:19:21.972 }, 00:19:21.972 { 00:19:21.972 "name": "BaseBdev2", 00:19:21.972 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:21.972 "is_configured": false, 00:19:21.972 "data_offset": 0, 00:19:21.972 "data_size": 0 00:19:21.972 }, 00:19:21.972 { 00:19:21.972 "name": "BaseBdev3", 00:19:21.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.973 "is_configured": false, 00:19:21.973 "data_offset": 0, 00:19:21.973 "data_size": 0 00:19:21.973 } 00:19:21.973 ] 00:19:21.973 }' 00:19:21.973 12:03:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.973 12:03:27 -- common/autotest_common.sh@10 -- # set +x 00:19:22.540 12:03:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:22.798 [2024-11-29 12:03:28.064833] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:22.798 [2024-11-29 12:03:28.065114] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:22.798 12:03:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:22.798 [2024-11-29 12:03:28.304965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.798 [2024-11-29 12:03:28.305260] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.798 [2024-11-29 12:03:28.305387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.798 [2024-11-29 12:03:28.305459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.798 [2024-11-29 12:03:28.305564] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:22.798 [2024-11-29 12:03:28.305635] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:23.057 12:03:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:23.316 [2024-11-29 12:03:28.577528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.316 BaseBdev1 00:19:23.316 12:03:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:23.316 12:03:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:23.316 12:03:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:23.316 12:03:28 -- common/autotest_common.sh@899 -- # local i 00:19:23.316 12:03:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:23.316 12:03:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:23.316 12:03:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.316 12:03:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:23.575 [ 00:19:23.575 { 00:19:23.575 "name": "BaseBdev1", 00:19:23.575 "aliases": [ 00:19:23.575 "70eae80a-940c-4206-8a6d-b537156544af" 00:19:23.575 ], 00:19:23.575 "product_name": "Malloc disk", 00:19:23.575 "block_size": 512, 00:19:23.575 "num_blocks": 65536, 00:19:23.575 "uuid": "70eae80a-940c-4206-8a6d-b537156544af", 00:19:23.575 "assigned_rate_limits": { 00:19:23.575 "rw_ios_per_sec": 0, 00:19:23.575 "rw_mbytes_per_sec": 0, 00:19:23.575 "r_mbytes_per_sec": 0, 00:19:23.575 
"w_mbytes_per_sec": 0 00:19:23.575 }, 00:19:23.575 "claimed": true, 00:19:23.575 "claim_type": "exclusive_write", 00:19:23.575 "zoned": false, 00:19:23.575 "supported_io_types": { 00:19:23.575 "read": true, 00:19:23.575 "write": true, 00:19:23.575 "unmap": true, 00:19:23.575 "write_zeroes": true, 00:19:23.575 "flush": true, 00:19:23.575 "reset": true, 00:19:23.575 "compare": false, 00:19:23.575 "compare_and_write": false, 00:19:23.575 "abort": true, 00:19:23.575 "nvme_admin": false, 00:19:23.575 "nvme_io": false 00:19:23.575 }, 00:19:23.575 "memory_domains": [ 00:19:23.575 { 00:19:23.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.575 "dma_device_type": 2 00:19:23.575 } 00:19:23.575 ], 00:19:23.575 "driver_specific": {} 00:19:23.575 } 00:19:23.575 ] 00:19:23.575 12:03:29 -- common/autotest_common.sh@905 -- # return 0 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.575 12:03:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.834 12:03:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.834 "name": "Existed_Raid", 00:19:23.834 "uuid": "91662e04-eae9-4f33-9f6c-e46a9059f78d", 00:19:23.834 "strip_size_kb": 64, 00:19:23.834 "state": "configuring", 00:19:23.834 "raid_level": "concat", 00:19:23.834 "superblock": true, 00:19:23.834 "num_base_bdevs": 3, 00:19:23.834 "num_base_bdevs_discovered": 1, 00:19:23.834 "num_base_bdevs_operational": 3, 00:19:23.834 "base_bdevs_list": [ 00:19:23.834 { 00:19:23.834 "name": "BaseBdev1", 00:19:23.835 "uuid": "70eae80a-940c-4206-8a6d-b537156544af", 00:19:23.835 "is_configured": true, 00:19:23.835 "data_offset": 2048, 00:19:23.835 "data_size": 63488 00:19:23.835 }, 00:19:23.835 { 00:19:23.835 "name": "BaseBdev2", 00:19:23.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.835 "is_configured": false, 00:19:23.835 "data_offset": 0, 00:19:23.835 "data_size": 0 00:19:23.835 }, 00:19:23.835 { 00:19:23.835 "name": "BaseBdev3", 00:19:23.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.835 "is_configured": false, 00:19:23.835 "data_offset": 0, 00:19:23.835 "data_size": 0 00:19:23.835 } 00:19:23.835 ] 00:19:23.835 }' 00:19:23.835 12:03:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.835 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.772 12:03:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:24.772 [2024-11-29 12:03:30.222000] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.772 [2024-11-29 12:03:30.222321] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:19:24.772 12:03:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:24.772 12:03:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:25.031 12:03:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:25.289 BaseBdev1 00:19:25.289 12:03:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:25.289 12:03:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:25.289 12:03:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:25.289 12:03:30 -- common/autotest_common.sh@899 -- # local i 00:19:25.289 12:03:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:25.289 12:03:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:25.289 12:03:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.548 12:03:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.806 [ 00:19:25.807 { 00:19:25.807 "name": "BaseBdev1", 00:19:25.807 "aliases": [ 00:19:25.807 "c629904d-d713-4503-940b-f65db13db43d" 00:19:25.807 ], 00:19:25.807 "product_name": "Malloc disk", 00:19:25.807 "block_size": 512, 00:19:25.807 "num_blocks": 65536, 00:19:25.807 "uuid": "c629904d-d713-4503-940b-f65db13db43d", 00:19:25.807 "assigned_rate_limits": { 00:19:25.807 "rw_ios_per_sec": 0, 00:19:25.807 "rw_mbytes_per_sec": 0, 00:19:25.807 "r_mbytes_per_sec": 0, 00:19:25.807 "w_mbytes_per_sec": 0 00:19:25.807 }, 00:19:25.807 "claimed": false, 00:19:25.807 "zoned": false, 00:19:25.807 "supported_io_types": { 00:19:25.807 "read": true, 00:19:25.807 "write": true, 00:19:25.807 "unmap": true, 00:19:25.807 "write_zeroes": true, 00:19:25.807 "flush": true, 00:19:25.807 "reset": true, 00:19:25.807 "compare": false, 00:19:25.807 "compare_and_write": false, 00:19:25.807 "abort": true, 00:19:25.807 "nvme_admin": false, 00:19:25.807 "nvme_io": false 00:19:25.807 }, 00:19:25.807 "memory_domains": [ 00:19:25.807 { 00:19:25.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.807 "dma_device_type": 2 00:19:25.807 } 00:19:25.807 ], 00:19:25.807 "driver_specific": {} 00:19:25.807 } 00:19:25.807 ] 00:19:25.807 12:03:31 -- common/autotest_common.sh@905 -- # return 0 00:19:25.807 12:03:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:26.066 [2024-11-29 12:03:31.529734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.066 [2024-11-29 12:03:31.532226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.066 [2024-11-29 12:03:31.532464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.066 [2024-11-29 12:03:31.532583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.066 [2024-11-29 12:03:31.532770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:26.066 
12:03:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.066 12:03:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.325 12:03:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.325 "name": "Existed_Raid", 00:19:26.325 "uuid": "7476c8df-f269-4470-8402-525b42308f57", 00:19:26.325 "strip_size_kb": 64, 00:19:26.325 "state": "configuring", 00:19:26.325 "raid_level": "concat", 00:19:26.325 "superblock": true, 00:19:26.325 "num_base_bdevs": 3, 00:19:26.325 "num_base_bdevs_discovered": 1, 00:19:26.325 "num_base_bdevs_operational": 3, 00:19:26.325 "base_bdevs_list": [ 00:19:26.325 { 00:19:26.325 "name": "BaseBdev1", 00:19:26.325 "uuid": "c629904d-d713-4503-940b-f65db13db43d", 00:19:26.325 "is_configured": true, 00:19:26.325 "data_offset": 2048, 00:19:26.325 "data_size": 63488 00:19:26.325 }, 00:19:26.325 { 00:19:26.325 "name": "BaseBdev2", 00:19:26.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.325 "is_configured": false, 00:19:26.325 "data_offset": 0, 00:19:26.325 "data_size": 0 00:19:26.325 }, 00:19:26.325 { 00:19:26.325 "name": "BaseBdev3", 00:19:26.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.325 "is_configured": false, 00:19:26.325 "data_offset": 0, 00:19:26.325 "data_size": 0 00:19:26.325 } 00:19:26.325 ] 00:19:26.325 }' 00:19:26.325 12:03:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.325 12:03:31 -- common/autotest_common.sh@10 -- # set +x 00:19:27.319 12:03:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:27.319 [2024-11-29 12:03:32.782572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.319 BaseBdev2 00:19:27.319 12:03:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:27.319 12:03:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:27.319 12:03:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:27.319 12:03:32 -- common/autotest_common.sh@899 -- # local i 00:19:27.319 12:03:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:27.319 12:03:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:27.319 12:03:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.577 12:03:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:27.835 [ 00:19:27.835 { 00:19:27.835 "name": "BaseBdev2", 00:19:27.835 "aliases": [ 00:19:27.835 
"dc15d85e-a8d7-4231-b7be-99e27296ca0e" 00:19:27.835 ], 00:19:27.835 "product_name": "Malloc disk", 00:19:27.835 "block_size": 512, 00:19:27.835 "num_blocks": 65536, 00:19:27.835 "uuid": "dc15d85e-a8d7-4231-b7be-99e27296ca0e", 00:19:27.835 "assigned_rate_limits": { 00:19:27.835 "rw_ios_per_sec": 0, 00:19:27.835 "rw_mbytes_per_sec": 0, 00:19:27.835 "r_mbytes_per_sec": 0, 00:19:27.835 "w_mbytes_per_sec": 0 00:19:27.835 }, 00:19:27.835 "claimed": true, 00:19:27.835 "claim_type": "exclusive_write", 00:19:27.835 "zoned": false, 00:19:27.835 "supported_io_types": { 00:19:27.835 "read": true, 00:19:27.835 "write": true, 00:19:27.835 "unmap": true, 00:19:27.835 "write_zeroes": true, 00:19:27.835 "flush": true, 00:19:27.835 "reset": true, 00:19:27.835 "compare": false, 00:19:27.835 "compare_and_write": false, 00:19:27.835 "abort": true, 00:19:27.835 "nvme_admin": false, 00:19:27.835 "nvme_io": false 00:19:27.835 }, 00:19:27.835 "memory_domains": [ 00:19:27.835 { 00:19:27.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.835 "dma_device_type": 2 00:19:27.835 } 00:19:27.835 ], 00:19:27.835 "driver_specific": {} 00:19:27.835 } 00:19:27.835 ] 00:19:27.835 12:03:33 -- common/autotest_common.sh@905 -- # return 0 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.835 12:03:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.092 12:03:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.092 12:03:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.349 12:03:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.349 "name": "Existed_Raid", 00:19:28.349 "uuid": "7476c8df-f269-4470-8402-525b42308f57", 00:19:28.349 "strip_size_kb": 64, 00:19:28.349 "state": "configuring", 00:19:28.349 "raid_level": "concat", 00:19:28.349 "superblock": true, 00:19:28.349 "num_base_bdevs": 3, 00:19:28.349 "num_base_bdevs_discovered": 2, 00:19:28.349 "num_base_bdevs_operational": 3, 00:19:28.349 "base_bdevs_list": [ 00:19:28.349 { 00:19:28.349 "name": "BaseBdev1", 00:19:28.349 "uuid": "c629904d-d713-4503-940b-f65db13db43d", 00:19:28.349 "is_configured": true, 00:19:28.349 "data_offset": 2048, 00:19:28.349 "data_size": 63488 00:19:28.349 }, 00:19:28.349 { 00:19:28.349 "name": "BaseBdev2", 00:19:28.349 "uuid": "dc15d85e-a8d7-4231-b7be-99e27296ca0e", 00:19:28.349 "is_configured": true, 00:19:28.349 "data_offset": 2048, 00:19:28.349 "data_size": 63488 00:19:28.349 }, 00:19:28.349 { 00:19:28.349 "name": "BaseBdev3", 00:19:28.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.349 "is_configured": false, 00:19:28.349 "data_offset": 0, 00:19:28.349 "data_size": 0 
00:19:28.349 } 00:19:28.349 ] 00:19:28.349 }' 00:19:28.349 12:03:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.349 12:03:33 -- common/autotest_common.sh@10 -- # set +x 00:19:28.911 12:03:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:29.168 [2024-11-29 12:03:34.440632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:29.168 [2024-11-29 12:03:34.440939] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:19:29.168 [2024-11-29 12:03:34.440956] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:29.168 [2024-11-29 12:03:34.441153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:19:29.168 [2024-11-29 12:03:34.441642] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:19:29.168 [2024-11-29 12:03:34.441670] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:19:29.168 [2024-11-29 12:03:34.441840] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.168 BaseBdev3 00:19:29.168 12:03:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:29.168 12:03:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:29.168 12:03:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:29.168 12:03:34 -- common/autotest_common.sh@899 -- # local i 00:19:29.168 12:03:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:29.168 12:03:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:29.168 12:03:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.423 12:03:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:29.423 [ 00:19:29.423 { 00:19:29.423 "name": "BaseBdev3", 00:19:29.423 "aliases": [ 00:19:29.423 "63706455-bfb4-4ae8-9010-480d91035204" 00:19:29.423 ], 00:19:29.423 "product_name": "Malloc disk", 00:19:29.423 "block_size": 512, 00:19:29.423 "num_blocks": 65536, 00:19:29.423 "uuid": "63706455-bfb4-4ae8-9010-480d91035204", 00:19:29.423 "assigned_rate_limits": { 00:19:29.423 "rw_ios_per_sec": 0, 00:19:29.423 "rw_mbytes_per_sec": 0, 00:19:29.423 "r_mbytes_per_sec": 0, 00:19:29.423 "w_mbytes_per_sec": 0 00:19:29.423 }, 00:19:29.423 "claimed": true, 00:19:29.423 "claim_type": "exclusive_write", 00:19:29.423 "zoned": false, 00:19:29.423 "supported_io_types": { 00:19:29.423 "read": true, 00:19:29.423 "write": true, 00:19:29.423 "unmap": true, 00:19:29.423 "write_zeroes": true, 00:19:29.423 "flush": true, 00:19:29.423 "reset": true, 00:19:29.423 "compare": false, 00:19:29.423 "compare_and_write": false, 00:19:29.423 "abort": true, 00:19:29.423 "nvme_admin": false, 00:19:29.423 "nvme_io": false 00:19:29.423 }, 00:19:29.423 "memory_domains": [ 00:19:29.423 { 00:19:29.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.423 "dma_device_type": 2 00:19:29.423 } 00:19:29.423 ], 00:19:29.423 "driver_specific": {} 00:19:29.423 } 00:19:29.423 ] 00:19:29.423 12:03:34 -- common/autotest_common.sh@905 -- # return 0 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:29.423 12:03:34 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.423 12:03:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.987 12:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.987 "name": "Existed_Raid", 00:19:29.987 "uuid": "7476c8df-f269-4470-8402-525b42308f57", 00:19:29.987 "strip_size_kb": 64, 00:19:29.987 "state": "online", 00:19:29.987 "raid_level": "concat", 00:19:29.987 "superblock": true, 00:19:29.987 "num_base_bdevs": 3, 00:19:29.987 "num_base_bdevs_discovered": 3, 00:19:29.988 "num_base_bdevs_operational": 3, 00:19:29.988 "base_bdevs_list": [ 00:19:29.988 { 00:19:29.988 "name": "BaseBdev1", 00:19:29.988 "uuid": "c629904d-d713-4503-940b-f65db13db43d", 00:19:29.988 "is_configured": true, 00:19:29.988 "data_offset": 2048, 00:19:29.988 "data_size": 63488 00:19:29.988 }, 00:19:29.988 { 00:19:29.988 "name": "BaseBdev2", 00:19:29.988 "uuid": "dc15d85e-a8d7-4231-b7be-99e27296ca0e", 00:19:29.988 "is_configured": true, 00:19:29.988 "data_offset": 2048, 00:19:29.988 "data_size": 63488 00:19:29.988 }, 00:19:29.988 { 00:19:29.988 "name": "BaseBdev3", 00:19:29.988 "uuid": "63706455-bfb4-4ae8-9010-480d91035204", 00:19:29.988 "is_configured": true, 00:19:29.988 "data_offset": 2048, 00:19:29.988 "data_size": 63488 00:19:29.988 } 00:19:29.988 ] 00:19:29.988 }' 00:19:29.988 12:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.988 12:03:35 -- common/autotest_common.sh@10 -- # set +x 00:19:30.554 12:03:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:30.811 [2024-11-29 12:03:36.073283] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.811 [2024-11-29 12:03:36.073336] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.811 [2024-11-29 12:03:36.073415] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:30.811 12:03:36 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.811 12:03:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.069 12:03:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.069 "name": "Existed_Raid", 00:19:31.069 "uuid": "7476c8df-f269-4470-8402-525b42308f57", 00:19:31.069 "strip_size_kb": 64, 00:19:31.069 "state": "offline", 00:19:31.069 "raid_level": "concat", 00:19:31.069 "superblock": true, 00:19:31.069 "num_base_bdevs": 3, 00:19:31.069 "num_base_bdevs_discovered": 2, 00:19:31.069 "num_base_bdevs_operational": 2, 00:19:31.069 "base_bdevs_list": [ 00:19:31.069 { 00:19:31.069 "name": null, 00:19:31.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.069 "is_configured": false, 00:19:31.069 "data_offset": 2048, 00:19:31.069 "data_size": 63488 00:19:31.069 }, 00:19:31.069 { 00:19:31.069 "name": "BaseBdev2", 00:19:31.069 "uuid": "dc15d85e-a8d7-4231-b7be-99e27296ca0e", 00:19:31.069 "is_configured": true, 00:19:31.069 "data_offset": 2048, 00:19:31.069 "data_size": 63488 00:19:31.069 }, 00:19:31.069 { 00:19:31.069 "name": "BaseBdev3", 00:19:31.069 "uuid": "63706455-bfb4-4ae8-9010-480d91035204", 00:19:31.069 "is_configured": true, 00:19:31.069 "data_offset": 2048, 00:19:31.069 "data_size": 63488 00:19:31.069 } 00:19:31.069 ] 00:19:31.069 }' 00:19:31.069 12:03:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.069 12:03:36 -- common/autotest_common.sh@10 -- # set +x 00:19:31.635 12:03:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:31.635 12:03:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:31.635 12:03:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.635 12:03:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:31.894 12:03:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:31.894 12:03:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:31.894 12:03:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:32.153 [2024-11-29 12:03:37.585172] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:32.153 12:03:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:32.153 12:03:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:32.153 12:03:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.153 12:03:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:32.412 12:03:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:32.412 12:03:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.412 12:03:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:32.670 [2024-11-29 12:03:38.068791] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
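The expected state after each removal above follows directly from the RAID level: the has_redundancy helper treats raid1 as able to survive a missing member, while raid0 and concat cannot. A small sketch of that decision (illustrative helper name, not the harness function itself):

# expected array state after losing one base bdev, keyed on the raid level
expected_state_after_removal() {
    case "$1" in
        raid1)        echo online  ;;  # mirrored members, data still reachable
        raid0|concat) echo offline ;;  # no redundancy, a missing member stops the array
        *)            echo unknown ;;
    esac
}
expected_state_after_removal concat   # prints "offline", matching the state verified above

That is why deleting BaseBdev1 from the concat array drops it to "offline" with two of three members still discovered, and the remaining deletions only clean up.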
00:19:32.670 [2024-11-29 12:03:38.068875] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:19:32.670 12:03:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:32.670 12:03:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:32.670 12:03:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.670 12:03:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:32.928 12:03:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:32.928 12:03:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:32.928 12:03:38 -- bdev/bdev_raid.sh@287 -- # killprocess 127625 00:19:32.928 12:03:38 -- common/autotest_common.sh@936 -- # '[' -z 127625 ']' 00:19:32.928 12:03:38 -- common/autotest_common.sh@940 -- # kill -0 127625 00:19:32.928 12:03:38 -- common/autotest_common.sh@941 -- # uname 00:19:32.928 12:03:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.928 12:03:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127625 00:19:32.928 12:03:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:32.928 12:03:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:32.928 12:03:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127625' 00:19:32.928 killing process with pid 127625 00:19:32.928 12:03:38 -- common/autotest_common.sh@955 -- # kill 127625 00:19:32.928 [2024-11-29 12:03:38.359856] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.928 [2024-11-29 12:03:38.359976] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.928 12:03:38 -- common/autotest_common.sh@960 -- # wait 127625 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:33.187 00:19:33.187 real 0m13.099s 00:19:33.187 user 0m23.939s 00:19:33.187 sys 0m1.746s 00:19:33.187 12:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:33.187 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:19:33.187 ************************************ 00:19:33.187 END TEST raid_state_function_test_sb 00:19:33.187 ************************************ 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:33.187 12:03:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:33.187 12:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.187 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:19:33.187 ************************************ 00:19:33.187 START TEST raid_superblock_test 00:19:33.187 ************************************ 00:19:33.187 12:03:38 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=128022 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:33.187 12:03:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128022 /var/tmp/spdk-raid.sock 00:19:33.187 12:03:38 -- common/autotest_common.sh@829 -- # '[' -z 128022 ']' 00:19:33.187 12:03:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:33.187 12:03:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:33.187 12:03:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:33.187 12:03:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.187 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:19:33.446 [2024-11-29 12:03:38.739550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:33.446 [2024-11-29 12:03:38.739825] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128022 ] 00:19:33.446 [2024-11-29 12:03:38.888468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.704 [2024-11-29 12:03:38.979970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.704 [2024-11-29 12:03:39.036290] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.270 12:03:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.270 12:03:39 -- common/autotest_common.sh@862 -- # return 0 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.270 12:03:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:34.529 malloc1 00:19:34.529 12:03:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.787 [2024-11-29 12:03:40.241882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.787 [2024-11-29 12:03:40.242005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:34.787 [2024-11-29 12:03:40.242058] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:34.787 [2024-11-29 12:03:40.242113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.787 [2024-11-29 12:03:40.245014] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.787 [2024-11-29 12:03:40.245103] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.787 pt1 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.787 12:03:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:35.044 malloc2 00:19:35.045 12:03:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.303 [2024-11-29 12:03:40.785546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.303 [2024-11-29 12:03:40.785676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.303 [2024-11-29 12:03:40.785721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:35.303 [2024-11-29 12:03:40.785769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.303 [2024-11-29 12:03:40.788394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.303 [2024-11-29 12:03:40.788457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.303 pt2 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.303 12:03:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:35.590 malloc3 00:19:35.590 12:03:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:35.848 [2024-11-29 12:03:41.329741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:35.848 [2024-11-29 12:03:41.329875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:35.848 [2024-11-29 12:03:41.329925] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:35.848 [2024-11-29 12:03:41.329975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.848 [2024-11-29 12:03:41.332637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.848 [2024-11-29 12:03:41.332700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:35.848 pt3 00:19:35.848 12:03:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:35.848 12:03:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.848 12:03:41 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:36.107 [2024-11-29 12:03:41.597924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.107 [2024-11-29 12:03:41.600299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.107 [2024-11-29 12:03:41.600399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:36.107 [2024-11-29 12:03:41.600643] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:36.107 [2024-11-29 12:03:41.600669] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:36.107 [2024-11-29 12:03:41.600848] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:36.107 [2024-11-29 12:03:41.601312] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:36.107 [2024-11-29 12:03:41.601337] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:19:36.107 [2024-11-29 12:03:41.601537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.107 12:03:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.383 12:03:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.383 "name": "raid_bdev1", 00:19:36.383 "uuid": "db58d258-5091-4bf8-9d1e-35a0476b3300", 00:19:36.383 "strip_size_kb": 64, 00:19:36.383 "state": "online", 00:19:36.383 "raid_level": "concat", 00:19:36.383 "superblock": true, 00:19:36.383 "num_base_bdevs": 3, 00:19:36.383 "num_base_bdevs_discovered": 3, 00:19:36.383 "num_base_bdevs_operational": 3, 00:19:36.383 "base_bdevs_list": [ 00:19:36.383 { 00:19:36.383 "name": "pt1", 00:19:36.383 "uuid": 
"9ea2ecbb-7b68-56ef-a12a-d32f17b30a88", 00:19:36.383 "is_configured": true, 00:19:36.383 "data_offset": 2048, 00:19:36.383 "data_size": 63488 00:19:36.383 }, 00:19:36.383 { 00:19:36.383 "name": "pt2", 00:19:36.383 "uuid": "7167b009-d23e-5c1c-98fa-93012ea25592", 00:19:36.383 "is_configured": true, 00:19:36.383 "data_offset": 2048, 00:19:36.383 "data_size": 63488 00:19:36.383 }, 00:19:36.383 { 00:19:36.383 "name": "pt3", 00:19:36.383 "uuid": "d92639b2-fc3d-5fb3-9b67-1e5c7e6eb0c1", 00:19:36.383 "is_configured": true, 00:19:36.383 "data_offset": 2048, 00:19:36.383 "data_size": 63488 00:19:36.383 } 00:19:36.383 ] 00:19:36.383 }' 00:19:36.383 12:03:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.383 12:03:41 -- common/autotest_common.sh@10 -- # set +x 00:19:37.318 12:03:42 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.318 12:03:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:37.318 [2024-11-29 12:03:42.790395] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.318 12:03:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=db58d258-5091-4bf8-9d1e-35a0476b3300 00:19:37.318 12:03:42 -- bdev/bdev_raid.sh@380 -- # '[' -z db58d258-5091-4bf8-9d1e-35a0476b3300 ']' 00:19:37.318 12:03:42 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:37.576 [2024-11-29 12:03:43.066114] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.576 [2024-11-29 12:03:43.066154] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.576 [2024-11-29 12:03:43.066281] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.576 [2024-11-29 12:03:43.066386] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.576 [2024-11-29 12:03:43.066403] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:19:37.576 12:03:43 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.576 12:03:43 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:37.835 12:03:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:37.835 12:03:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:37.835 12:03:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.835 12:03:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:38.093 12:03:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.093 12:03:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:38.352 12:03:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.353 12:03:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:38.611 12:03:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:38.612 12:03:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:38.871 12:03:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:38.871 12:03:44 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:38.871 12:03:44 -- common/autotest_common.sh@650 -- # local es=0 00:19:38.871 12:03:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:38.871 12:03:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.871 12:03:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.871 12:03:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.871 12:03:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.871 12:03:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.871 12:03:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.871 12:03:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.871 12:03:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:38.871 12:03:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:39.130 [2024-11-29 12:03:44.602494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:39.130 [2024-11-29 12:03:44.604800] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:39.131 [2024-11-29 12:03:44.604863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:39.131 [2024-11-29 12:03:44.604928] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:39.131 [2024-11-29 12:03:44.605033] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:39.131 [2024-11-29 12:03:44.605074] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:39.131 [2024-11-29 12:03:44.605125] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.131 [2024-11-29 12:03:44.605139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:19:39.131 request: 00:19:39.131 { 00:19:39.131 "name": "raid_bdev1", 00:19:39.131 "raid_level": "concat", 00:19:39.131 "base_bdevs": [ 00:19:39.131 "malloc1", 00:19:39.131 "malloc2", 00:19:39.131 "malloc3" 00:19:39.131 ], 00:19:39.131 "superblock": false, 00:19:39.131 "strip_size_kb": 64, 00:19:39.131 "method": "bdev_raid_create", 00:19:39.131 "req_id": 1 00:19:39.131 } 00:19:39.131 Got JSON-RPC error response 00:19:39.131 response: 00:19:39.131 { 00:19:39.131 "code": -17, 00:19:39.131 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:39.131 } 00:19:39.131 12:03:44 -- common/autotest_common.sh@653 -- # es=1 00:19:39.131 12:03:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.131 12:03:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.131 12:03:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.131 12:03:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.131 12:03:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:39.390 12:03:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:39.390 12:03:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:39.390 12:03:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.649 [2024-11-29 12:03:45.154509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.649 [2024-11-29 12:03:45.154643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.650 [2024-11-29 12:03:45.154690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:39.650 [2024-11-29 12:03:45.154719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.650 [2024-11-29 12:03:45.157445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.650 [2024-11-29 12:03:45.157504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.650 [2024-11-29 12:03:45.157633] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:39.650 [2024-11-29 12:03:45.157728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.650 pt1 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.908 12:03:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.167 12:03:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.167 "name": "raid_bdev1", 00:19:40.167 "uuid": "db58d258-5091-4bf8-9d1e-35a0476b3300", 00:19:40.167 "strip_size_kb": 64, 00:19:40.167 "state": "configuring", 00:19:40.167 "raid_level": "concat", 00:19:40.167 "superblock": true, 00:19:40.167 "num_base_bdevs": 3, 00:19:40.167 "num_base_bdevs_discovered": 1, 00:19:40.167 "num_base_bdevs_operational": 3, 00:19:40.167 "base_bdevs_list": [ 00:19:40.167 { 00:19:40.167 "name": "pt1", 00:19:40.167 "uuid": "9ea2ecbb-7b68-56ef-a12a-d32f17b30a88", 00:19:40.167 "is_configured": true, 00:19:40.167 "data_offset": 2048, 00:19:40.167 "data_size": 63488 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "name": null, 00:19:40.167 "uuid": "7167b009-d23e-5c1c-98fa-93012ea25592", 00:19:40.167 "is_configured": false, 00:19:40.167 "data_offset": 2048, 00:19:40.167 "data_size": 63488 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "name": null, 00:19:40.167 "uuid": "d92639b2-fc3d-5fb3-9b67-1e5c7e6eb0c1", 00:19:40.167 "is_configured": false, 00:19:40.167 
"data_offset": 2048, 00:19:40.167 "data_size": 63488 00:19:40.167 } 00:19:40.167 ] 00:19:40.167 }' 00:19:40.167 12:03:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.167 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:19:40.734 12:03:46 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:19:40.734 12:03:46 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.734 [2024-11-29 12:03:46.246792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.734 [2024-11-29 12:03:46.246902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.734 [2024-11-29 12:03:46.246968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:40.734 [2024-11-29 12:03:46.247012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.734 [2024-11-29 12:03:46.247489] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.734 [2024-11-29 12:03:46.247540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.734 [2024-11-29 12:03:46.247653] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:40.734 [2024-11-29 12:03:46.247683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.993 pt2 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:40.993 [2024-11-29 12:03:46.470884] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.993 12:03:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.252 12:03:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.252 "name": "raid_bdev1", 00:19:41.252 "uuid": "db58d258-5091-4bf8-9d1e-35a0476b3300", 00:19:41.252 "strip_size_kb": 64, 00:19:41.252 "state": "configuring", 00:19:41.252 "raid_level": "concat", 00:19:41.252 "superblock": true, 00:19:41.252 "num_base_bdevs": 3, 00:19:41.252 "num_base_bdevs_discovered": 1, 00:19:41.252 "num_base_bdevs_operational": 3, 00:19:41.252 "base_bdevs_list": [ 00:19:41.252 { 00:19:41.252 "name": "pt1", 00:19:41.252 "uuid": "9ea2ecbb-7b68-56ef-a12a-d32f17b30a88", 00:19:41.252 "is_configured": true, 00:19:41.252 "data_offset": 2048, 00:19:41.252 "data_size": 63488 00:19:41.252 }, 00:19:41.252 { 00:19:41.252 "name": null, 00:19:41.252 "uuid": 
"7167b009-d23e-5c1c-98fa-93012ea25592", 00:19:41.252 "is_configured": false, 00:19:41.252 "data_offset": 2048, 00:19:41.252 "data_size": 63488 00:19:41.252 }, 00:19:41.252 { 00:19:41.252 "name": null, 00:19:41.252 "uuid": "d92639b2-fc3d-5fb3-9b67-1e5c7e6eb0c1", 00:19:41.252 "is_configured": false, 00:19:41.252 "data_offset": 2048, 00:19:41.252 "data_size": 63488 00:19:41.252 } 00:19:41.252 ] 00:19:41.252 }' 00:19:41.252 12:03:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.252 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:19:42.231 12:03:47 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:42.231 12:03:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.231 12:03:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.231 [2024-11-29 12:03:47.727100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.231 [2024-11-29 12:03:47.727220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.231 [2024-11-29 12:03:47.727262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:42.231 [2024-11-29 12:03:47.727295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.231 [2024-11-29 12:03:47.727810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.231 [2024-11-29 12:03:47.727862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.231 [2024-11-29 12:03:47.727970] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:42.231 [2024-11-29 12:03:47.727998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.231 pt2 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.491 [2024-11-29 12:03:47.959175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.491 [2024-11-29 12:03:47.959274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.491 [2024-11-29 12:03:47.959324] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:42.491 [2024-11-29 12:03:47.959357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.491 [2024-11-29 12:03:47.959851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.491 [2024-11-29 12:03:47.959917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.491 [2024-11-29 12:03:47.960044] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:42.491 [2024-11-29 12:03:47.960080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.491 [2024-11-29 12:03:47.960239] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:42.491 [2024-11-29 12:03:47.960264] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:42.491 [2024-11-29 12:03:47.960372] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:19:42.491 [2024-11-29 12:03:47.960723] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:42.491 [2024-11-29 12:03:47.960748] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:42.491 [2024-11-29 12:03:47.960861] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.491 pt3 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.491 12:03:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.750 12:03:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.750 "name": "raid_bdev1", 00:19:42.750 "uuid": "db58d258-5091-4bf8-9d1e-35a0476b3300", 00:19:42.750 "strip_size_kb": 64, 00:19:42.750 "state": "online", 00:19:42.750 "raid_level": "concat", 00:19:42.750 "superblock": true, 00:19:42.750 "num_base_bdevs": 3, 00:19:42.750 "num_base_bdevs_discovered": 3, 00:19:42.750 "num_base_bdevs_operational": 3, 00:19:42.750 "base_bdevs_list": [ 00:19:42.750 { 00:19:42.750 "name": "pt1", 00:19:42.750 "uuid": "9ea2ecbb-7b68-56ef-a12a-d32f17b30a88", 00:19:42.750 "is_configured": true, 00:19:42.750 "data_offset": 2048, 00:19:42.750 "data_size": 63488 00:19:42.750 }, 00:19:42.750 { 00:19:42.750 "name": "pt2", 00:19:42.750 "uuid": "7167b009-d23e-5c1c-98fa-93012ea25592", 00:19:42.750 "is_configured": true, 00:19:42.750 "data_offset": 2048, 00:19:42.750 "data_size": 63488 00:19:42.750 }, 00:19:42.750 { 00:19:42.750 "name": "pt3", 00:19:42.750 "uuid": "d92639b2-fc3d-5fb3-9b67-1e5c7e6eb0c1", 00:19:42.750 "is_configured": true, 00:19:42.750 "data_offset": 2048, 00:19:42.750 "data_size": 63488 00:19:42.750 } 00:19:42.750 ] 00:19:42.750 }' 00:19:42.750 12:03:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.750 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:19:43.688 12:03:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.688 12:03:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:43.947 [2024-11-29 12:03:49.215737] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.947 12:03:49 -- bdev/bdev_raid.sh@430 -- # '[' db58d258-5091-4bf8-9d1e-35a0476b3300 '!=' db58d258-5091-4bf8-9d1e-35a0476b3300 ']' 00:19:43.947 12:03:49 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:43.947 12:03:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:43.947 
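The passage above is the superblock round trip at the heart of this test: after raid_bdev1 and its passthru members are deleted, re-registering pt1, pt2 and pt3 is enough for bdev examine to find the on-disk superblock and reassemble the array under its original name and UUID. A condensed sketch of that flow (same socket and names as the run above; the ordering of the re-creates is simplified compared with the trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# tear down the assembled array and its passthru members
$rpc bdev_raid_delete raid_bdev1
for pt in pt1 pt2 pt3; do $rpc bdev_passthru_delete $pt; done
# re-register the passthru bdevs on top of the untouched malloc bdevs
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
# examine should bring raid_bdev1 back online with the uuid recorded in the superblock
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state, .uuid'

The earlier failed bdev_raid_create over malloc1, malloc2 and malloc3 ("File exists") is the complementary check: the superblock written through the passthru layer also lands on the underlying malloc bdevs, so they refuse to be claimed into a new array.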
12:03:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:43.947 12:03:49 -- bdev/bdev_raid.sh@511 -- # killprocess 128022 00:19:43.947 12:03:49 -- common/autotest_common.sh@936 -- # '[' -z 128022 ']' 00:19:43.947 12:03:49 -- common/autotest_common.sh@940 -- # kill -0 128022 00:19:43.947 12:03:49 -- common/autotest_common.sh@941 -- # uname 00:19:43.947 12:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:43.947 12:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128022 00:19:43.947 12:03:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:43.947 12:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:43.947 killing process with pid 128022 00:19:43.947 12:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128022' 00:19:43.947 12:03:49 -- common/autotest_common.sh@955 -- # kill 128022 00:19:43.947 [2024-11-29 12:03:49.263462] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:43.947 12:03:49 -- common/autotest_common.sh@960 -- # wait 128022 00:19:43.947 [2024-11-29 12:03:49.263578] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.947 [2024-11-29 12:03:49.263650] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.947 [2024-11-29 12:03:49.263662] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:43.947 [2024-11-29 12:03:49.304066] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:44.206 00:19:44.206 real 0m10.880s 00:19:44.206 user 0m19.908s 00:19:44.206 sys 0m1.395s 00:19:44.206 12:03:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:44.206 ************************************ 00:19:44.206 END TEST raid_superblock_test 00:19:44.206 ************************************ 00:19:44.206 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:44.206 12:03:49 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:44.206 12:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:44.206 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:19:44.206 ************************************ 00:19:44.206 START TEST raid_state_function_test 00:19:44.206 ************************************ 00:19:44.206 12:03:49 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=128336 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:44.206 Process raid pid: 128336 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128336' 00:19:44.206 12:03:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128336 /var/tmp/spdk-raid.sock 00:19:44.206 12:03:49 -- common/autotest_common.sh@829 -- # '[' -z 128336 ']' 00:19:44.206 12:03:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:44.206 12:03:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.206 12:03:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:44.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:44.206 12:03:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.206 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:19:44.206 [2024-11-29 12:03:49.675715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
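Each test case in this suite drives a freshly started bdev_svc application over a private RPC socket, which is what the startup banner above corresponds to for pid 128336. The launch-and-wait step performed by waitforlisten amounts to roughly the following (a sketch of the idea, not the harness code; the polling RPC used here is illustrative):

sock=/var/tmp/spdk-raid.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
# start the standalone bdev service with raid debug logging, as the test does
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
svc_pid=$!
# block until the RPC socket answers before issuing any bdev_raid_* calls
until $rpc rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
    sleep 0.1
done

Once the socket is live, the raid1 state-machine checks below proceed like the concat ones earlier, starting from an Existed_Raid declared over base bdevs that do not exist yet.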
00:19:44.206 [2024-11-29 12:03:49.676139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.464 [2024-11-29 12:03:49.820939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.464 [2024-11-29 12:03:49.920543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.723 [2024-11-29 12:03:49.978940] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.290 12:03:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.290 12:03:50 -- common/autotest_common.sh@862 -- # return 0 00:19:45.290 12:03:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:45.549 [2024-11-29 12:03:50.961676] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:45.549 [2024-11-29 12:03:50.961974] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:45.549 [2024-11-29 12:03:50.962094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:45.549 [2024-11-29 12:03:50.962158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:45.549 [2024-11-29 12:03:50.962256] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:45.549 [2024-11-29 12:03:50.962441] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.549 12:03:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.808 12:03:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.808 "name": "Existed_Raid", 00:19:45.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.808 "strip_size_kb": 0, 00:19:45.808 "state": "configuring", 00:19:45.808 "raid_level": "raid1", 00:19:45.808 "superblock": false, 00:19:45.808 "num_base_bdevs": 3, 00:19:45.808 "num_base_bdevs_discovered": 0, 00:19:45.808 "num_base_bdevs_operational": 3, 00:19:45.808 "base_bdevs_list": [ 00:19:45.808 { 00:19:45.808 "name": "BaseBdev1", 00:19:45.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.808 "is_configured": false, 00:19:45.808 "data_offset": 0, 00:19:45.808 "data_size": 0 00:19:45.808 }, 00:19:45.808 { 00:19:45.808 "name": "BaseBdev2", 00:19:45.808 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:45.808 "is_configured": false, 00:19:45.808 "data_offset": 0, 00:19:45.808 "data_size": 0 00:19:45.808 }, 00:19:45.808 { 00:19:45.808 "name": "BaseBdev3", 00:19:45.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.808 "is_configured": false, 00:19:45.808 "data_offset": 0, 00:19:45.808 "data_size": 0 00:19:45.808 } 00:19:45.808 ] 00:19:45.808 }' 00:19:45.808 12:03:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.808 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.376 12:03:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:46.634 [2024-11-29 12:03:52.133783] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:46.634 [2024-11-29 12:03:52.134125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:46.893 12:03:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:46.893 [2024-11-29 12:03:52.405886] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.893 [2024-11-29 12:03:52.406256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.893 [2024-11-29 12:03:52.406396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.893 [2024-11-29 12:03:52.406468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.151 [2024-11-29 12:03:52.406585] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:47.151 [2024-11-29 12:03:52.406655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.151 12:03:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.410 [2024-11-29 12:03:52.702680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.410 BaseBdev1 00:19:47.410 12:03:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:47.410 12:03:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:47.410 12:03:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:47.410 12:03:52 -- common/autotest_common.sh@899 -- # local i 00:19:47.410 12:03:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:47.410 12:03:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:47.410 12:03:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.670 12:03:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.928 [ 00:19:47.928 { 00:19:47.928 "name": "BaseBdev1", 00:19:47.928 "aliases": [ 00:19:47.928 "ba3400ff-9399-4ad0-b3ce-38bc3a1bce65" 00:19:47.928 ], 00:19:47.928 "product_name": "Malloc disk", 00:19:47.928 "block_size": 512, 00:19:47.928 "num_blocks": 65536, 00:19:47.928 "uuid": "ba3400ff-9399-4ad0-b3ce-38bc3a1bce65", 00:19:47.928 "assigned_rate_limits": { 00:19:47.928 "rw_ios_per_sec": 0, 00:19:47.928 "rw_mbytes_per_sec": 0, 00:19:47.928 "r_mbytes_per_sec": 0, 00:19:47.928 "w_mbytes_per_sec": 0 
00:19:47.928 }, 00:19:47.928 "claimed": true, 00:19:47.928 "claim_type": "exclusive_write", 00:19:47.928 "zoned": false, 00:19:47.928 "supported_io_types": { 00:19:47.928 "read": true, 00:19:47.928 "write": true, 00:19:47.928 "unmap": true, 00:19:47.928 "write_zeroes": true, 00:19:47.928 "flush": true, 00:19:47.928 "reset": true, 00:19:47.928 "compare": false, 00:19:47.928 "compare_and_write": false, 00:19:47.928 "abort": true, 00:19:47.928 "nvme_admin": false, 00:19:47.928 "nvme_io": false 00:19:47.928 }, 00:19:47.928 "memory_domains": [ 00:19:47.928 { 00:19:47.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.928 "dma_device_type": 2 00:19:47.928 } 00:19:47.928 ], 00:19:47.928 "driver_specific": {} 00:19:47.928 } 00:19:47.928 ] 00:19:47.928 12:03:53 -- common/autotest_common.sh@905 -- # return 0 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.928 12:03:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.929 12:03:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.929 12:03:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.929 12:03:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.929 12:03:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.187 12:03:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.187 "name": "Existed_Raid", 00:19:48.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.187 "strip_size_kb": 0, 00:19:48.187 "state": "configuring", 00:19:48.187 "raid_level": "raid1", 00:19:48.187 "superblock": false, 00:19:48.187 "num_base_bdevs": 3, 00:19:48.187 "num_base_bdevs_discovered": 1, 00:19:48.187 "num_base_bdevs_operational": 3, 00:19:48.187 "base_bdevs_list": [ 00:19:48.187 { 00:19:48.187 "name": "BaseBdev1", 00:19:48.187 "uuid": "ba3400ff-9399-4ad0-b3ce-38bc3a1bce65", 00:19:48.187 "is_configured": true, 00:19:48.187 "data_offset": 0, 00:19:48.187 "data_size": 65536 00:19:48.187 }, 00:19:48.187 { 00:19:48.187 "name": "BaseBdev2", 00:19:48.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.187 "is_configured": false, 00:19:48.187 "data_offset": 0, 00:19:48.187 "data_size": 0 00:19:48.187 }, 00:19:48.187 { 00:19:48.187 "name": "BaseBdev3", 00:19:48.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.187 "is_configured": false, 00:19:48.187 "data_offset": 0, 00:19:48.187 "data_size": 0 00:19:48.187 } 00:19:48.187 ] 00:19:48.187 }' 00:19:48.187 12:03:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.187 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:19:48.753 12:03:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:49.012 [2024-11-29 12:03:54.491187] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:49.012 [2024-11-29 12:03:54.491536] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 
name Existed_Raid, state configuring 00:19:49.012 12:03:54 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:49.012 12:03:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:49.270 [2024-11-29 12:03:54.775402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.270 [2024-11-29 12:03:54.777979] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:49.270 [2024-11-29 12:03:54.778214] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:49.270 [2024-11-29 12:03:54.778330] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:49.270 [2024-11-29 12:03:54.778426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.529 12:03:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.787 12:03:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.787 "name": "Existed_Raid", 00:19:49.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.787 "strip_size_kb": 0, 00:19:49.787 "state": "configuring", 00:19:49.787 "raid_level": "raid1", 00:19:49.787 "superblock": false, 00:19:49.787 "num_base_bdevs": 3, 00:19:49.787 "num_base_bdevs_discovered": 1, 00:19:49.787 "num_base_bdevs_operational": 3, 00:19:49.787 "base_bdevs_list": [ 00:19:49.787 { 00:19:49.787 "name": "BaseBdev1", 00:19:49.787 "uuid": "ba3400ff-9399-4ad0-b3ce-38bc3a1bce65", 00:19:49.787 "is_configured": true, 00:19:49.787 "data_offset": 0, 00:19:49.787 "data_size": 65536 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "name": "BaseBdev2", 00:19:49.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.787 "is_configured": false, 00:19:49.787 "data_offset": 0, 00:19:49.787 "data_size": 0 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "name": "BaseBdev3", 00:19:49.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.787 "is_configured": false, 00:19:49.787 "data_offset": 0, 00:19:49.787 "data_size": 0 00:19:49.787 } 00:19:49.787 ] 00:19:49.787 }' 00:19:49.787 12:03:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.787 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:19:50.354 12:03:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:50.612 [2024-11-29 12:03:55.914776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:50.612 BaseBdev2 00:19:50.612 12:03:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:50.612 12:03:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:50.612 12:03:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:50.612 12:03:55 -- common/autotest_common.sh@899 -- # local i 00:19:50.612 12:03:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:50.612 12:03:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:50.612 12:03:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.870 12:03:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:51.129 [ 00:19:51.129 { 00:19:51.129 "name": "BaseBdev2", 00:19:51.129 "aliases": [ 00:19:51.129 "ad4cc3d5-e766-4c9e-8274-38f50597273a" 00:19:51.129 ], 00:19:51.129 "product_name": "Malloc disk", 00:19:51.129 "block_size": 512, 00:19:51.129 "num_blocks": 65536, 00:19:51.129 "uuid": "ad4cc3d5-e766-4c9e-8274-38f50597273a", 00:19:51.129 "assigned_rate_limits": { 00:19:51.129 "rw_ios_per_sec": 0, 00:19:51.129 "rw_mbytes_per_sec": 0, 00:19:51.129 "r_mbytes_per_sec": 0, 00:19:51.129 "w_mbytes_per_sec": 0 00:19:51.129 }, 00:19:51.129 "claimed": true, 00:19:51.129 "claim_type": "exclusive_write", 00:19:51.129 "zoned": false, 00:19:51.129 "supported_io_types": { 00:19:51.129 "read": true, 00:19:51.129 "write": true, 00:19:51.129 "unmap": true, 00:19:51.129 "write_zeroes": true, 00:19:51.129 "flush": true, 00:19:51.129 "reset": true, 00:19:51.129 "compare": false, 00:19:51.129 "compare_and_write": false, 00:19:51.129 "abort": true, 00:19:51.129 "nvme_admin": false, 00:19:51.129 "nvme_io": false 00:19:51.129 }, 00:19:51.129 "memory_domains": [ 00:19:51.129 { 00:19:51.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.129 "dma_device_type": 2 00:19:51.129 } 00:19:51.129 ], 00:19:51.129 "driver_specific": {} 00:19:51.129 } 00:19:51.129 ] 00:19:51.129 12:03:56 -- common/autotest_common.sh@905 -- # return 0 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.129 12:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.387 12:03:56 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:51.387 "name": "Existed_Raid", 00:19:51.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.387 "strip_size_kb": 0, 00:19:51.387 "state": "configuring", 00:19:51.387 "raid_level": "raid1", 00:19:51.387 "superblock": false, 00:19:51.387 "num_base_bdevs": 3, 00:19:51.387 "num_base_bdevs_discovered": 2, 00:19:51.387 "num_base_bdevs_operational": 3, 00:19:51.387 "base_bdevs_list": [ 00:19:51.387 { 00:19:51.387 "name": "BaseBdev1", 00:19:51.387 "uuid": "ba3400ff-9399-4ad0-b3ce-38bc3a1bce65", 00:19:51.387 "is_configured": true, 00:19:51.387 "data_offset": 0, 00:19:51.387 "data_size": 65536 00:19:51.387 }, 00:19:51.387 { 00:19:51.387 "name": "BaseBdev2", 00:19:51.387 "uuid": "ad4cc3d5-e766-4c9e-8274-38f50597273a", 00:19:51.387 "is_configured": true, 00:19:51.387 "data_offset": 0, 00:19:51.387 "data_size": 65536 00:19:51.387 }, 00:19:51.387 { 00:19:51.387 "name": "BaseBdev3", 00:19:51.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.387 "is_configured": false, 00:19:51.387 "data_offset": 0, 00:19:51.387 "data_size": 0 00:19:51.387 } 00:19:51.387 ] 00:19:51.387 }' 00:19:51.387 12:03:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.387 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.953 12:03:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:52.211 [2024-11-29 12:03:57.652549] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:52.211 [2024-11-29 12:03:57.653986] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:19:52.211 [2024-11-29 12:03:57.654041] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:52.211 [2024-11-29 12:03:57.654370] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:19:52.211 [2024-11-29 12:03:57.655014] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:19:52.211 [2024-11-29 12:03:57.655151] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:19:52.211 [2024-11-29 12:03:57.655533] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.211 BaseBdev3 00:19:52.211 12:03:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:52.211 12:03:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:52.211 12:03:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:52.211 12:03:57 -- common/autotest_common.sh@899 -- # local i 00:19:52.211 12:03:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:52.211 12:03:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:52.211 12:03:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:52.469 12:03:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:52.727 [ 00:19:52.727 { 00:19:52.727 "name": "BaseBdev3", 00:19:52.727 "aliases": [ 00:19:52.727 "d10002e2-9289-4f6b-8b3e-bbcd771c231a" 00:19:52.727 ], 00:19:52.727 "product_name": "Malloc disk", 00:19:52.727 "block_size": 512, 00:19:52.727 "num_blocks": 65536, 00:19:52.727 "uuid": "d10002e2-9289-4f6b-8b3e-bbcd771c231a", 00:19:52.727 "assigned_rate_limits": { 00:19:52.727 "rw_ios_per_sec": 0, 00:19:52.727 "rw_mbytes_per_sec": 0, 
00:19:52.727 "r_mbytes_per_sec": 0, 00:19:52.727 "w_mbytes_per_sec": 0 00:19:52.727 }, 00:19:52.727 "claimed": true, 00:19:52.727 "claim_type": "exclusive_write", 00:19:52.727 "zoned": false, 00:19:52.727 "supported_io_types": { 00:19:52.727 "read": true, 00:19:52.727 "write": true, 00:19:52.727 "unmap": true, 00:19:52.727 "write_zeroes": true, 00:19:52.727 "flush": true, 00:19:52.727 "reset": true, 00:19:52.727 "compare": false, 00:19:52.727 "compare_and_write": false, 00:19:52.727 "abort": true, 00:19:52.727 "nvme_admin": false, 00:19:52.727 "nvme_io": false 00:19:52.727 }, 00:19:52.727 "memory_domains": [ 00:19:52.727 { 00:19:52.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.727 "dma_device_type": 2 00:19:52.727 } 00:19:52.727 ], 00:19:52.727 "driver_specific": {} 00:19:52.727 } 00:19:52.727 ] 00:19:52.727 12:03:58 -- common/autotest_common.sh@905 -- # return 0 00:19:52.727 12:03:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.728 12:03:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.986 12:03:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.986 "name": "Existed_Raid", 00:19:52.986 "uuid": "5da56397-00b7-464a-a1de-742120d9f2e4", 00:19:52.986 "strip_size_kb": 0, 00:19:52.986 "state": "online", 00:19:52.986 "raid_level": "raid1", 00:19:52.986 "superblock": false, 00:19:52.986 "num_base_bdevs": 3, 00:19:52.986 "num_base_bdevs_discovered": 3, 00:19:52.986 "num_base_bdevs_operational": 3, 00:19:52.986 "base_bdevs_list": [ 00:19:52.986 { 00:19:52.986 "name": "BaseBdev1", 00:19:52.986 "uuid": "ba3400ff-9399-4ad0-b3ce-38bc3a1bce65", 00:19:52.986 "is_configured": true, 00:19:52.986 "data_offset": 0, 00:19:52.986 "data_size": 65536 00:19:52.986 }, 00:19:52.986 { 00:19:52.986 "name": "BaseBdev2", 00:19:52.986 "uuid": "ad4cc3d5-e766-4c9e-8274-38f50597273a", 00:19:52.986 "is_configured": true, 00:19:52.986 "data_offset": 0, 00:19:52.986 "data_size": 65536 00:19:52.986 }, 00:19:52.986 { 00:19:52.986 "name": "BaseBdev3", 00:19:52.986 "uuid": "d10002e2-9289-4f6b-8b3e-bbcd771c231a", 00:19:52.986 "is_configured": true, 00:19:52.986 "data_offset": 0, 00:19:52.986 "data_size": 65536 00:19:52.986 } 00:19:52.986 ] 00:19:52.986 }' 00:19:52.986 12:03:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.986 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:53.920 [2024-11-29 
12:03:59.371356] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.920 12:03:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.178 12:03:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.178 "name": "Existed_Raid", 00:19:54.178 "uuid": "5da56397-00b7-464a-a1de-742120d9f2e4", 00:19:54.178 "strip_size_kb": 0, 00:19:54.178 "state": "online", 00:19:54.178 "raid_level": "raid1", 00:19:54.178 "superblock": false, 00:19:54.178 "num_base_bdevs": 3, 00:19:54.178 "num_base_bdevs_discovered": 2, 00:19:54.178 "num_base_bdevs_operational": 2, 00:19:54.178 "base_bdevs_list": [ 00:19:54.178 { 00:19:54.178 "name": null, 00:19:54.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.178 "is_configured": false, 00:19:54.178 "data_offset": 0, 00:19:54.178 "data_size": 65536 00:19:54.178 }, 00:19:54.178 { 00:19:54.178 "name": "BaseBdev2", 00:19:54.178 "uuid": "ad4cc3d5-e766-4c9e-8274-38f50597273a", 00:19:54.178 "is_configured": true, 00:19:54.178 "data_offset": 0, 00:19:54.178 "data_size": 65536 00:19:54.178 }, 00:19:54.178 { 00:19:54.178 "name": "BaseBdev3", 00:19:54.178 "uuid": "d10002e2-9289-4f6b-8b3e-bbcd771c231a", 00:19:54.178 "is_configured": true, 00:19:54.178 "data_offset": 0, 00:19:54.178 "data_size": 65536 00:19:54.178 } 00:19:54.178 ] 00:19:54.178 }' 00:19:54.178 12:03:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.178 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:55.112 12:04:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:55.370 [2024-11-29 12:04:00.832550] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:19:55.370 12:04:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:55.370 12:04:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:55.370 12:04:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.370 12:04:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:55.936 12:04:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:55.937 12:04:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:55.937 12:04:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:55.937 [2024-11-29 12:04:01.397224] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:55.937 [2024-11-29 12:04:01.397451] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.937 [2024-11-29 12:04:01.397670] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.937 [2024-11-29 12:04:01.412661] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.937 [2024-11-29 12:04:01.412894] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:19:55.937 12:04:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:55.937 12:04:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:55.937 12:04:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.937 12:04:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:56.195 12:04:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:56.195 12:04:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:56.195 12:04:01 -- bdev/bdev_raid.sh@287 -- # killprocess 128336 00:19:56.195 12:04:01 -- common/autotest_common.sh@936 -- # '[' -z 128336 ']' 00:19:56.195 12:04:01 -- common/autotest_common.sh@940 -- # kill -0 128336 00:19:56.195 12:04:01 -- common/autotest_common.sh@941 -- # uname 00:19:56.195 12:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.195 12:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128336 00:19:56.195 killing process with pid 128336 00:19:56.195 12:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:56.195 12:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:56.195 12:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128336' 00:19:56.195 12:04:01 -- common/autotest_common.sh@955 -- # kill 128336 00:19:56.195 12:04:01 -- common/autotest_common.sh@960 -- # wait 128336 00:19:56.195 [2024-11-29 12:04:01.701017] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:56.195 [2024-11-29 12:04:01.701172] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:56.763 00:19:56.763 real 0m12.422s 00:19:56.763 user 0m22.855s 00:19:56.763 sys 0m1.454s 00:19:56.763 12:04:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.763 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:56.763 ************************************ 00:19:56.763 END TEST raid_state_function_test 00:19:56.763 ************************************ 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
00:19:56.763 12:04:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:56.763 12:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.763 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:56.763 ************************************ 00:19:56.763 START TEST raid_state_function_test_sb 00:19:56.763 ************************************ 00:19:56.763 12:04:02 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=128721 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128721' 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:56.763 Process raid pid: 128721 00:19:56.763 12:04:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128721 /var/tmp/spdk-raid.sock 00:19:56.763 12:04:02 -- common/autotest_common.sh@829 -- # '[' -z 128721 ']' 00:19:56.763 12:04:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:56.763 12:04:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.763 12:04:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:56.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:56.763 12:04:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.763 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:56.763 [2024-11-29 12:04:02.173360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:56.763 [2024-11-29 12:04:02.173974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.033 [2024-11-29 12:04:02.321628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.033 [2024-11-29 12:04:02.419742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.033 [2024-11-29 12:04:02.476575] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.984 12:04:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.984 12:04:03 -- common/autotest_common.sh@862 -- # return 0 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:57.984 [2024-11-29 12:04:03.434529] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.984 [2024-11-29 12:04:03.434923] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:57.984 [2024-11-29 12:04:03.435047] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.984 [2024-11-29 12:04:03.435112] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.984 [2024-11-29 12:04:03.435339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:57.984 [2024-11-29 12:04:03.435437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.984 12:04:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.243 12:04:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.243 "name": "Existed_Raid", 00:19:58.243 "uuid": "9bb6e9fb-e55f-4828-8bee-4f8e855288c8", 00:19:58.243 "strip_size_kb": 0, 00:19:58.243 "state": "configuring", 00:19:58.243 "raid_level": "raid1", 00:19:58.243 "superblock": true, 00:19:58.243 "num_base_bdevs": 3, 00:19:58.243 "num_base_bdevs_discovered": 0, 00:19:58.243 "num_base_bdevs_operational": 3, 00:19:58.243 "base_bdevs_list": [ 00:19:58.243 { 00:19:58.243 "name": "BaseBdev1", 00:19:58.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.243 "is_configured": false, 00:19:58.243 "data_offset": 0, 00:19:58.243 "data_size": 0 00:19:58.243 }, 00:19:58.243 { 00:19:58.243 "name": "BaseBdev2", 00:19:58.243 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:58.243 "is_configured": false, 00:19:58.243 "data_offset": 0, 00:19:58.243 "data_size": 0 00:19:58.243 }, 00:19:58.243 { 00:19:58.243 "name": "BaseBdev3", 00:19:58.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.243 "is_configured": false, 00:19:58.243 "data_offset": 0, 00:19:58.243 "data_size": 0 00:19:58.243 } 00:19:58.243 ] 00:19:58.243 }' 00:19:58.243 12:04:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.243 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:19:59.179 12:04:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:59.179 [2024-11-29 12:04:04.574636] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.179 [2024-11-29 12:04:04.574982] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:59.179 12:04:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:59.438 [2024-11-29 12:04:04.810749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.438 [2024-11-29 12:04:04.811027] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.438 [2024-11-29 12:04:04.811148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.438 [2024-11-29 12:04:04.811217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.438 [2024-11-29 12:04:04.811423] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:59.438 [2024-11-29 12:04:04.811496] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:59.438 12:04:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:59.697 [2024-11-29 12:04:05.050801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.697 BaseBdev1 00:19:59.697 12:04:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:59.697 12:04:05 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:59.697 12:04:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:59.697 12:04:05 -- common/autotest_common.sh@899 -- # local i 00:19:59.697 12:04:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:59.697 12:04:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:59.697 12:04:05 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.956 12:04:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:00.214 [ 00:20:00.214 { 00:20:00.214 "name": "BaseBdev1", 00:20:00.214 "aliases": [ 00:20:00.214 "2ac14448-b4a6-4900-9577-fdf6e4affee1" 00:20:00.214 ], 00:20:00.214 "product_name": "Malloc disk", 00:20:00.214 "block_size": 512, 00:20:00.214 "num_blocks": 65536, 00:20:00.214 "uuid": "2ac14448-b4a6-4900-9577-fdf6e4affee1", 00:20:00.214 "assigned_rate_limits": { 00:20:00.214 "rw_ios_per_sec": 0, 00:20:00.214 "rw_mbytes_per_sec": 0, 00:20:00.214 "r_mbytes_per_sec": 0, 00:20:00.214 "w_mbytes_per_sec": 0 
00:20:00.214 }, 00:20:00.214 "claimed": true, 00:20:00.214 "claim_type": "exclusive_write", 00:20:00.214 "zoned": false, 00:20:00.214 "supported_io_types": { 00:20:00.214 "read": true, 00:20:00.214 "write": true, 00:20:00.214 "unmap": true, 00:20:00.214 "write_zeroes": true, 00:20:00.214 "flush": true, 00:20:00.214 "reset": true, 00:20:00.214 "compare": false, 00:20:00.214 "compare_and_write": false, 00:20:00.214 "abort": true, 00:20:00.214 "nvme_admin": false, 00:20:00.214 "nvme_io": false 00:20:00.214 }, 00:20:00.214 "memory_domains": [ 00:20:00.214 { 00:20:00.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.214 "dma_device_type": 2 00:20:00.214 } 00:20:00.214 ], 00:20:00.214 "driver_specific": {} 00:20:00.214 } 00:20:00.214 ] 00:20:00.214 12:04:05 -- common/autotest_common.sh@905 -- # return 0 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.214 12:04:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.471 12:04:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.471 "name": "Existed_Raid", 00:20:00.471 "uuid": "eb8dc6bb-a68c-4465-a59c-c7a4e24c6a0c", 00:20:00.471 "strip_size_kb": 0, 00:20:00.471 "state": "configuring", 00:20:00.471 "raid_level": "raid1", 00:20:00.471 "superblock": true, 00:20:00.471 "num_base_bdevs": 3, 00:20:00.471 "num_base_bdevs_discovered": 1, 00:20:00.471 "num_base_bdevs_operational": 3, 00:20:00.471 "base_bdevs_list": [ 00:20:00.471 { 00:20:00.471 "name": "BaseBdev1", 00:20:00.471 "uuid": "2ac14448-b4a6-4900-9577-fdf6e4affee1", 00:20:00.471 "is_configured": true, 00:20:00.471 "data_offset": 2048, 00:20:00.471 "data_size": 63488 00:20:00.471 }, 00:20:00.471 { 00:20:00.471 "name": "BaseBdev2", 00:20:00.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.471 "is_configured": false, 00:20:00.471 "data_offset": 0, 00:20:00.471 "data_size": 0 00:20:00.471 }, 00:20:00.471 { 00:20:00.471 "name": "BaseBdev3", 00:20:00.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.471 "is_configured": false, 00:20:00.471 "data_offset": 0, 00:20:00.471 "data_size": 0 00:20:00.471 } 00:20:00.471 ] 00:20:00.471 }' 00:20:00.471 12:04:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.471 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:20:01.038 12:04:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:01.297 [2024-11-29 12:04:06.691205] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:01.297 [2024-11-29 12:04:06.691518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:20:01.297 12:04:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:01.297 12:04:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:01.557 12:04:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:01.815 BaseBdev1 00:20:01.816 12:04:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:01.816 12:04:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:01.816 12:04:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:01.816 12:04:07 -- common/autotest_common.sh@899 -- # local i 00:20:01.816 12:04:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:01.816 12:04:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:01.816 12:04:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:02.073 12:04:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:02.331 [ 00:20:02.331 { 00:20:02.331 "name": "BaseBdev1", 00:20:02.331 "aliases": [ 00:20:02.331 "46056142-7147-48e6-972a-e82f3527818b" 00:20:02.331 ], 00:20:02.331 "product_name": "Malloc disk", 00:20:02.331 "block_size": 512, 00:20:02.331 "num_blocks": 65536, 00:20:02.331 "uuid": "46056142-7147-48e6-972a-e82f3527818b", 00:20:02.331 "assigned_rate_limits": { 00:20:02.331 "rw_ios_per_sec": 0, 00:20:02.331 "rw_mbytes_per_sec": 0, 00:20:02.331 "r_mbytes_per_sec": 0, 00:20:02.331 "w_mbytes_per_sec": 0 00:20:02.331 }, 00:20:02.331 "claimed": false, 00:20:02.331 "zoned": false, 00:20:02.331 "supported_io_types": { 00:20:02.331 "read": true, 00:20:02.331 "write": true, 00:20:02.331 "unmap": true, 00:20:02.331 "write_zeroes": true, 00:20:02.331 "flush": true, 00:20:02.331 "reset": true, 00:20:02.331 "compare": false, 00:20:02.331 "compare_and_write": false, 00:20:02.331 "abort": true, 00:20:02.331 "nvme_admin": false, 00:20:02.331 "nvme_io": false 00:20:02.331 }, 00:20:02.331 "memory_domains": [ 00:20:02.331 { 00:20:02.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.331 "dma_device_type": 2 00:20:02.331 } 00:20:02.331 ], 00:20:02.331 "driver_specific": {} 00:20:02.331 } 00:20:02.331 ] 00:20:02.331 12:04:07 -- common/autotest_common.sh@905 -- # return 0 00:20:02.331 12:04:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:02.590 [2024-11-29 12:04:08.076622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.590 [2024-11-29 12:04:08.079225] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.590 [2024-11-29 12:04:08.079461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.590 [2024-11-29 12:04:08.079577] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:02.590 [2024-11-29 12:04:08.079647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:02.590 12:04:08 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.590 12:04:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.848 12:04:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.848 "name": "Existed_Raid", 00:20:02.848 "uuid": "9b028acb-2876-4bb0-82fe-1100fb7df424", 00:20:02.848 "strip_size_kb": 0, 00:20:02.848 "state": "configuring", 00:20:02.848 "raid_level": "raid1", 00:20:02.848 "superblock": true, 00:20:02.848 "num_base_bdevs": 3, 00:20:02.848 "num_base_bdevs_discovered": 1, 00:20:02.848 "num_base_bdevs_operational": 3, 00:20:02.848 "base_bdevs_list": [ 00:20:02.848 { 00:20:02.848 "name": "BaseBdev1", 00:20:02.848 "uuid": "46056142-7147-48e6-972a-e82f3527818b", 00:20:02.848 "is_configured": true, 00:20:02.848 "data_offset": 2048, 00:20:02.848 "data_size": 63488 00:20:02.848 }, 00:20:02.848 { 00:20:02.848 "name": "BaseBdev2", 00:20:02.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.848 "is_configured": false, 00:20:02.848 "data_offset": 0, 00:20:02.848 "data_size": 0 00:20:02.848 }, 00:20:02.848 { 00:20:02.848 "name": "BaseBdev3", 00:20:02.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.848 "is_configured": false, 00:20:02.848 "data_offset": 0, 00:20:02.848 "data_size": 0 00:20:02.848 } 00:20:02.848 ] 00:20:02.848 }' 00:20:02.848 12:04:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.848 12:04:08 -- common/autotest_common.sh@10 -- # set +x 00:20:03.829 12:04:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:03.829 [2024-11-29 12:04:09.310142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.829 BaseBdev2 00:20:04.092 12:04:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:04.092 12:04:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:04.092 12:04:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:04.092 12:04:09 -- common/autotest_common.sh@899 -- # local i 00:20:04.092 12:04:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:04.092 12:04:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:04.092 12:04:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:04.092 12:04:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:04.350 [ 00:20:04.350 { 00:20:04.350 "name": "BaseBdev2", 00:20:04.350 "aliases": [ 00:20:04.350 
"5885626e-da5c-41fe-aaad-bf05cfb880af" 00:20:04.350 ], 00:20:04.350 "product_name": "Malloc disk", 00:20:04.350 "block_size": 512, 00:20:04.350 "num_blocks": 65536, 00:20:04.350 "uuid": "5885626e-da5c-41fe-aaad-bf05cfb880af", 00:20:04.350 "assigned_rate_limits": { 00:20:04.350 "rw_ios_per_sec": 0, 00:20:04.350 "rw_mbytes_per_sec": 0, 00:20:04.350 "r_mbytes_per_sec": 0, 00:20:04.350 "w_mbytes_per_sec": 0 00:20:04.350 }, 00:20:04.350 "claimed": true, 00:20:04.350 "claim_type": "exclusive_write", 00:20:04.350 "zoned": false, 00:20:04.350 "supported_io_types": { 00:20:04.350 "read": true, 00:20:04.350 "write": true, 00:20:04.350 "unmap": true, 00:20:04.350 "write_zeroes": true, 00:20:04.350 "flush": true, 00:20:04.350 "reset": true, 00:20:04.350 "compare": false, 00:20:04.350 "compare_and_write": false, 00:20:04.350 "abort": true, 00:20:04.350 "nvme_admin": false, 00:20:04.350 "nvme_io": false 00:20:04.350 }, 00:20:04.350 "memory_domains": [ 00:20:04.350 { 00:20:04.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.351 "dma_device_type": 2 00:20:04.351 } 00:20:04.351 ], 00:20:04.351 "driver_specific": {} 00:20:04.351 } 00:20:04.351 ] 00:20:04.351 12:04:09 -- common/autotest_common.sh@905 -- # return 0 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.351 12:04:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.609 12:04:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.609 "name": "Existed_Raid", 00:20:04.609 "uuid": "9b028acb-2876-4bb0-82fe-1100fb7df424", 00:20:04.609 "strip_size_kb": 0, 00:20:04.609 "state": "configuring", 00:20:04.609 "raid_level": "raid1", 00:20:04.609 "superblock": true, 00:20:04.609 "num_base_bdevs": 3, 00:20:04.609 "num_base_bdevs_discovered": 2, 00:20:04.609 "num_base_bdevs_operational": 3, 00:20:04.609 "base_bdevs_list": [ 00:20:04.609 { 00:20:04.609 "name": "BaseBdev1", 00:20:04.609 "uuid": "46056142-7147-48e6-972a-e82f3527818b", 00:20:04.609 "is_configured": true, 00:20:04.609 "data_offset": 2048, 00:20:04.609 "data_size": 63488 00:20:04.609 }, 00:20:04.609 { 00:20:04.609 "name": "BaseBdev2", 00:20:04.609 "uuid": "5885626e-da5c-41fe-aaad-bf05cfb880af", 00:20:04.609 "is_configured": true, 00:20:04.609 "data_offset": 2048, 00:20:04.609 "data_size": 63488 00:20:04.609 }, 00:20:04.609 { 00:20:04.609 "name": "BaseBdev3", 00:20:04.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.609 "is_configured": false, 00:20:04.609 "data_offset": 0, 00:20:04.609 "data_size": 0 00:20:04.609 } 
00:20:04.609 ] 00:20:04.609 }' 00:20:04.609 12:04:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.609 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:20:05.544 12:04:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:05.544 [2024-11-29 12:04:11.040093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:05.544 [2024-11-29 12:04:11.040658] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:20:05.544 [2024-11-29 12:04:11.040801] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:05.544 [2024-11-29 12:04:11.041011] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:20:05.544 [2024-11-29 12:04:11.041579] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:20:05.544 [2024-11-29 12:04:11.041707] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:20:05.544 [2024-11-29 12:04:11.042032] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.544 BaseBdev3 00:20:05.803 12:04:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:05.803 12:04:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:05.803 12:04:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:05.803 12:04:11 -- common/autotest_common.sh@899 -- # local i 00:20:05.803 12:04:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:05.803 12:04:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:05.803 12:04:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.062 12:04:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:06.062 [ 00:20:06.062 { 00:20:06.062 "name": "BaseBdev3", 00:20:06.062 "aliases": [ 00:20:06.062 "684a1c2f-5d78-4205-8d73-2d744d6c97e1" 00:20:06.062 ], 00:20:06.062 "product_name": "Malloc disk", 00:20:06.062 "block_size": 512, 00:20:06.062 "num_blocks": 65536, 00:20:06.062 "uuid": "684a1c2f-5d78-4205-8d73-2d744d6c97e1", 00:20:06.062 "assigned_rate_limits": { 00:20:06.062 "rw_ios_per_sec": 0, 00:20:06.062 "rw_mbytes_per_sec": 0, 00:20:06.062 "r_mbytes_per_sec": 0, 00:20:06.062 "w_mbytes_per_sec": 0 00:20:06.062 }, 00:20:06.062 "claimed": true, 00:20:06.062 "claim_type": "exclusive_write", 00:20:06.062 "zoned": false, 00:20:06.062 "supported_io_types": { 00:20:06.062 "read": true, 00:20:06.062 "write": true, 00:20:06.062 "unmap": true, 00:20:06.062 "write_zeroes": true, 00:20:06.062 "flush": true, 00:20:06.062 "reset": true, 00:20:06.062 "compare": false, 00:20:06.062 "compare_and_write": false, 00:20:06.062 "abort": true, 00:20:06.062 "nvme_admin": false, 00:20:06.062 "nvme_io": false 00:20:06.062 }, 00:20:06.062 "memory_domains": [ 00:20:06.062 { 00:20:06.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.062 "dma_device_type": 2 00:20:06.062 } 00:20:06.062 ], 00:20:06.062 "driver_specific": {} 00:20:06.062 } 00:20:06.062 ] 00:20:06.062 12:04:11 -- common/autotest_common.sh@905 -- # return 0 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.062 12:04:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.063 12:04:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.321 12:04:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.321 "name": "Existed_Raid", 00:20:06.321 "uuid": "9b028acb-2876-4bb0-82fe-1100fb7df424", 00:20:06.321 "strip_size_kb": 0, 00:20:06.321 "state": "online", 00:20:06.321 "raid_level": "raid1", 00:20:06.321 "superblock": true, 00:20:06.321 "num_base_bdevs": 3, 00:20:06.321 "num_base_bdevs_discovered": 3, 00:20:06.321 "num_base_bdevs_operational": 3, 00:20:06.321 "base_bdevs_list": [ 00:20:06.321 { 00:20:06.321 "name": "BaseBdev1", 00:20:06.321 "uuid": "46056142-7147-48e6-972a-e82f3527818b", 00:20:06.321 "is_configured": true, 00:20:06.321 "data_offset": 2048, 00:20:06.321 "data_size": 63488 00:20:06.321 }, 00:20:06.321 { 00:20:06.321 "name": "BaseBdev2", 00:20:06.321 "uuid": "5885626e-da5c-41fe-aaad-bf05cfb880af", 00:20:06.321 "is_configured": true, 00:20:06.321 "data_offset": 2048, 00:20:06.321 "data_size": 63488 00:20:06.321 }, 00:20:06.321 { 00:20:06.321 "name": "BaseBdev3", 00:20:06.321 "uuid": "684a1c2f-5d78-4205-8d73-2d744d6c97e1", 00:20:06.321 "is_configured": true, 00:20:06.321 "data_offset": 2048, 00:20:06.321 "data_size": 63488 00:20:06.321 } 00:20:06.321 ] 00:20:06.321 }' 00:20:06.321 12:04:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.321 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:07.255 [2024-11-29 12:04:12.640642] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
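(Editor's note: the xtrace above and below shows bdev_raid.sh's verify_raid_bdev_state helper re-checking the raid bdev after BaseBdev1 is deleted: it pulls the raid bdev's JSON via rpc.py bdev_raid_get_bdevs, filters it with jq on the bdev name, and compares the result against the expected state. The helper's actual body is not reproduced in this log; what follows is only a minimal sketch inferred from the trace, assuming the comparison covers the state, raid level, strip size and operational base bdev count — the exact fields checked and any additional validation are assumptions, and the rpc.py path is copied from the commands seen in the trace.)

    # Sketch of the flow suggested by the xtrace; not the actual bdev_raid.sh code.
    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        # Query all raid bdevs over the test RPC socket and keep only the one under test
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Compare the observed fields against the expected values passed in by the caller
        [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]] || return 1
        [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]] || return 1
        (( $(jq -r .strip_size_kb <<<"$raid_bdev_info") == strip_size )) || return 1
        (( $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") == num_base_bdevs_operational )) || return 1
    }

(The discovered-bdev count and base_bdevs_list visible in the JSON dumps below are presumably checked in a similar way; that part is omitted from the sketch.)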
00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.255 12:04:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.513 12:04:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.513 "name": "Existed_Raid", 00:20:07.513 "uuid": "9b028acb-2876-4bb0-82fe-1100fb7df424", 00:20:07.513 "strip_size_kb": 0, 00:20:07.513 "state": "online", 00:20:07.513 "raid_level": "raid1", 00:20:07.513 "superblock": true, 00:20:07.513 "num_base_bdevs": 3, 00:20:07.513 "num_base_bdevs_discovered": 2, 00:20:07.513 "num_base_bdevs_operational": 2, 00:20:07.513 "base_bdevs_list": [ 00:20:07.513 { 00:20:07.513 "name": null, 00:20:07.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.513 "is_configured": false, 00:20:07.513 "data_offset": 2048, 00:20:07.513 "data_size": 63488 00:20:07.513 }, 00:20:07.513 { 00:20:07.513 "name": "BaseBdev2", 00:20:07.513 "uuid": "5885626e-da5c-41fe-aaad-bf05cfb880af", 00:20:07.513 "is_configured": true, 00:20:07.513 "data_offset": 2048, 00:20:07.513 "data_size": 63488 00:20:07.513 }, 00:20:07.513 { 00:20:07.513 "name": "BaseBdev3", 00:20:07.513 "uuid": "684a1c2f-5d78-4205-8d73-2d744d6c97e1", 00:20:07.513 "is_configured": true, 00:20:07.513 "data_offset": 2048, 00:20:07.513 "data_size": 63488 00:20:07.513 } 00:20:07.513 ] 00:20:07.513 }' 00:20:07.513 12:04:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.513 12:04:12 -- common/autotest_common.sh@10 -- # set +x 00:20:08.078 12:04:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:08.078 12:04:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:08.078 12:04:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:08.078 12:04:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.337 12:04:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:08.337 12:04:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:08.337 12:04:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:08.594 [2024-11-29 12:04:14.062822] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:08.595 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:08.595 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:08.595 12:04:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.595 12:04:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:09.159 12:04:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:09.159 12:04:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:09.159 12:04:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:09.417 [2024-11-29 12:04:14.681811] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:09.417 [2024-11-29 12:04:14.682128] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.417 [2024-11-29 12:04:14.682335] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.417 [2024-11-29 12:04:14.696975] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.417 [2024-11-29 12:04:14.697267] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:20:09.417 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:09.417 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:09.417 12:04:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.417 12:04:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:09.675 12:04:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:09.675 12:04:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:09.675 12:04:15 -- bdev/bdev_raid.sh@287 -- # killprocess 128721 00:20:09.675 12:04:15 -- common/autotest_common.sh@936 -- # '[' -z 128721 ']' 00:20:09.675 12:04:15 -- common/autotest_common.sh@940 -- # kill -0 128721 00:20:09.675 12:04:15 -- common/autotest_common.sh@941 -- # uname 00:20:09.675 12:04:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:09.675 12:04:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128721 00:20:09.675 killing process with pid 128721 00:20:09.675 12:04:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:09.675 12:04:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:09.675 12:04:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128721' 00:20:09.675 12:04:15 -- common/autotest_common.sh@955 -- # kill 128721 00:20:09.675 [2024-11-29 12:04:15.026957] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.675 12:04:15 -- common/autotest_common.sh@960 -- # wait 128721 00:20:09.675 [2024-11-29 12:04:15.027040] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.933 ************************************ 00:20:09.933 END TEST raid_state_function_test_sb 00:20:09.933 ************************************ 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:09.933 00:20:09.933 real 0m13.167s 00:20:09.933 user 0m24.054s 00:20:09.933 sys 0m1.833s 00:20:09.933 12:04:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:09.933 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:09.933 12:04:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:09.933 12:04:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.933 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.933 ************************************ 00:20:09.933 START TEST raid_superblock_test 00:20:09.933 ************************************ 00:20:09.933 12:04:15 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=129118 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:09.933 12:04:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129118 /var/tmp/spdk-raid.sock 00:20:09.933 12:04:15 -- common/autotest_common.sh@829 -- # '[' -z 129118 ']' 00:20:09.933 12:04:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:09.933 12:04:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.933 12:04:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:09.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:09.933 12:04:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.933 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.933 [2024-11-29 12:04:15.376374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:09.933 [2024-11-29 12:04:15.376804] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129118 ] 00:20:10.191 [2024-11-29 12:04:15.518370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.191 [2024-11-29 12:04:15.614153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.191 [2024-11-29 12:04:15.668947] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:11.124 12:04:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.124 12:04:16 -- common/autotest_common.sh@862 -- # return 0 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:11.124 malloc1 00:20:11.124 12:04:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:11.691 [2024-11-29 12:04:16.907234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:11.691 [2024-11-29 12:04:16.907633] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.691 [2024-11-29 12:04:16.907821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:11.691 [2024-11-29 12:04:16.908015] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.691 [2024-11-29 12:04:16.910986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.691 [2024-11-29 12:04:16.911188] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:11.691 pt1 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:11.691 12:04:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:11.691 malloc2 00:20:11.691 12:04:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.949 [2024-11-29 12:04:17.386954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.949 [2024-11-29 12:04:17.387287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.949 [2024-11-29 12:04:17.387493] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:11.949 [2024-11-29 12:04:17.387659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.949 [2024-11-29 12:04:17.390458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.949 [2024-11-29 12:04:17.390648] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.949 pt2 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:11.949 12:04:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:12.207 malloc3 00:20:12.207 12:04:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:12.466 [2024-11-29 12:04:17.888076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:12.466 [2024-11-29 12:04:17.888459] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.466 [2024-11-29 12:04:17.888635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:12.466 [2024-11-29 12:04:17.888809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.466 [2024-11-29 12:04:17.891497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.466 [2024-11-29 12:04:17.891687] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:12.466 pt3 00:20:12.466 12:04:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:12.466 12:04:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:12.466 12:04:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:12.726 [2024-11-29 12:04:18.136365] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:12.726 [2024-11-29 12:04:18.138988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:12.726 [2024-11-29 12:04:18.139224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:12.726 [2024-11-29 12:04:18.139628] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:12.726 [2024-11-29 12:04:18.139759] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:12.726 [2024-11-29 12:04:18.139996] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:12.726 [2024-11-29 12:04:18.140572] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:12.726 [2024-11-29 12:04:18.140697] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:20:12.726 [2024-11-29 12:04:18.141012] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.726 12:04:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.984 12:04:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.985 "name": "raid_bdev1", 00:20:12.985 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:12.985 "strip_size_kb": 0, 00:20:12.985 "state": "online", 00:20:12.985 "raid_level": "raid1", 00:20:12.985 "superblock": true, 00:20:12.985 "num_base_bdevs": 3, 00:20:12.985 "num_base_bdevs_discovered": 3, 00:20:12.985 "num_base_bdevs_operational": 3, 00:20:12.985 "base_bdevs_list": [ 00:20:12.985 { 00:20:12.985 "name": 
"pt1", 00:20:12.985 "uuid": "c597d8fe-1b77-52f9-a80c-970894c8219c", 00:20:12.985 "is_configured": true, 00:20:12.985 "data_offset": 2048, 00:20:12.985 "data_size": 63488 00:20:12.985 }, 00:20:12.985 { 00:20:12.985 "name": "pt2", 00:20:12.985 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:12.985 "is_configured": true, 00:20:12.985 "data_offset": 2048, 00:20:12.985 "data_size": 63488 00:20:12.985 }, 00:20:12.985 { 00:20:12.985 "name": "pt3", 00:20:12.985 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:12.985 "is_configured": true, 00:20:12.985 "data_offset": 2048, 00:20:12.985 "data_size": 63488 00:20:12.985 } 00:20:12.985 ] 00:20:12.985 }' 00:20:12.985 12:04:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.985 12:04:18 -- common/autotest_common.sh@10 -- # set +x 00:20:13.919 12:04:19 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:13.919 12:04:19 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:13.919 [2024-11-29 12:04:19.329527] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.919 12:04:19 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=21b0c3a9-c8ca-4aad-b49c-a8395aa600d8 00:20:13.919 12:04:19 -- bdev/bdev_raid.sh@380 -- # '[' -z 21b0c3a9-c8ca-4aad-b49c-a8395aa600d8 ']' 00:20:13.919 12:04:19 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:14.178 [2024-11-29 12:04:19.597285] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.178 [2024-11-29 12:04:19.597582] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.178 [2024-11-29 12:04:19.597797] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.178 [2024-11-29 12:04:19.598029] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.178 [2024-11-29 12:04:19.598154] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:20:14.178 12:04:19 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.178 12:04:19 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:14.436 12:04:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:14.436 12:04:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:14.436 12:04:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:14.436 12:04:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:14.719 12:04:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:14.719 12:04:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:14.989 12:04:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:14.989 12:04:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:15.247 12:04:20 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:15.247 12:04:20 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:15.505 12:04:20 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:15.505 12:04:20 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:15.505 12:04:20 -- common/autotest_common.sh@650 -- # local es=0 00:20:15.505 12:04:20 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:15.505 12:04:20 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.505 12:04:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.505 12:04:20 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.505 12:04:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.505 12:04:20 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.505 12:04:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.505 12:04:20 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.505 12:04:20 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:15.505 12:04:20 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:15.762 [2024-11-29 12:04:21.089582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:15.762 [2024-11-29 12:04:21.091836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:15.763 [2024-11-29 12:04:21.091897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:15.763 [2024-11-29 12:04:21.091955] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:15.763 [2024-11-29 12:04:21.092056] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:15.763 [2024-11-29 12:04:21.092097] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:15.763 [2024-11-29 12:04:21.092149] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.763 [2024-11-29 12:04:21.092163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:20:15.763 request: 00:20:15.763 { 00:20:15.763 "name": "raid_bdev1", 00:20:15.763 "raid_level": "raid1", 00:20:15.763 "base_bdevs": [ 00:20:15.763 "malloc1", 00:20:15.763 "malloc2", 00:20:15.763 "malloc3" 00:20:15.763 ], 00:20:15.763 "superblock": false, 00:20:15.763 "method": "bdev_raid_create", 00:20:15.763 "req_id": 1 00:20:15.763 } 00:20:15.763 Got JSON-RPC error response 00:20:15.763 response: 00:20:15.763 { 00:20:15.763 "code": -17, 00:20:15.763 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:15.763 } 00:20:15.763 12:04:21 -- common/autotest_common.sh@653 -- # es=1 00:20:15.763 12:04:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.763 12:04:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.763 12:04:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.763 12:04:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:15.763 12:04:21 -- bdev/bdev_raid.sh@403 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.020 12:04:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:16.020 12:04:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:16.020 12:04:21 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:16.278 [2024-11-29 12:04:21.589671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.278 [2024-11-29 12:04:21.589791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.278 [2024-11-29 12:04:21.589838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:16.278 [2024-11-29 12:04:21.589869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.278 [2024-11-29 12:04:21.592448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.278 [2024-11-29 12:04:21.592507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.278 [2024-11-29 12:04:21.592626] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:16.278 [2024-11-29 12:04:21.592682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.278 pt1 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:16.278 12:04:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.279 12:04:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.279 12:04:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.279 12:04:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.279 12:04:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.279 12:04:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.535 12:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.535 "name": "raid_bdev1", 00:20:16.535 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:16.535 "strip_size_kb": 0, 00:20:16.535 "state": "configuring", 00:20:16.535 "raid_level": "raid1", 00:20:16.535 "superblock": true, 00:20:16.535 "num_base_bdevs": 3, 00:20:16.535 "num_base_bdevs_discovered": 1, 00:20:16.535 "num_base_bdevs_operational": 3, 00:20:16.535 "base_bdevs_list": [ 00:20:16.535 { 00:20:16.535 "name": "pt1", 00:20:16.535 "uuid": "c597d8fe-1b77-52f9-a80c-970894c8219c", 00:20:16.535 "is_configured": true, 00:20:16.535 "data_offset": 2048, 00:20:16.535 "data_size": 63488 00:20:16.535 }, 00:20:16.535 { 00:20:16.535 "name": null, 00:20:16.535 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:16.535 "is_configured": false, 00:20:16.535 "data_offset": 2048, 00:20:16.535 "data_size": 63488 00:20:16.535 }, 00:20:16.535 { 00:20:16.535 "name": null, 00:20:16.535 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:16.535 "is_configured": false, 00:20:16.535 "data_offset": 2048, 
00:20:16.535 "data_size": 63488 00:20:16.535 } 00:20:16.535 ] 00:20:16.535 }' 00:20:16.535 12:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.535 12:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:17.100 12:04:22 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:20:17.100 12:04:22 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.358 [2024-11-29 12:04:22.857950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.358 [2024-11-29 12:04:22.858117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.358 [2024-11-29 12:04:22.858168] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:17.358 [2024-11-29 12:04:22.858211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.358 [2024-11-29 12:04:22.858715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.358 [2024-11-29 12:04:22.858769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.358 [2024-11-29 12:04:22.858882] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:17.358 [2024-11-29 12:04:22.858909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.358 pt2 00:20:17.616 12:04:22 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:17.875 [2024-11-29 12:04:23.138031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.875 12:04:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.134 12:04:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.134 "name": "raid_bdev1", 00:20:18.134 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:18.134 "strip_size_kb": 0, 00:20:18.134 "state": "configuring", 00:20:18.134 "raid_level": "raid1", 00:20:18.134 "superblock": true, 00:20:18.134 "num_base_bdevs": 3, 00:20:18.134 "num_base_bdevs_discovered": 1, 00:20:18.134 "num_base_bdevs_operational": 3, 00:20:18.134 "base_bdevs_list": [ 00:20:18.134 { 00:20:18.134 "name": "pt1", 00:20:18.134 "uuid": "c597d8fe-1b77-52f9-a80c-970894c8219c", 00:20:18.134 "is_configured": true, 00:20:18.134 "data_offset": 2048, 00:20:18.134 "data_size": 63488 00:20:18.134 }, 00:20:18.134 { 00:20:18.134 "name": null, 00:20:18.134 "uuid": 
"03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:18.134 "is_configured": false, 00:20:18.134 "data_offset": 2048, 00:20:18.134 "data_size": 63488 00:20:18.134 }, 00:20:18.134 { 00:20:18.134 "name": null, 00:20:18.134 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:18.134 "is_configured": false, 00:20:18.134 "data_offset": 2048, 00:20:18.134 "data_size": 63488 00:20:18.134 } 00:20:18.134 ] 00:20:18.134 }' 00:20:18.134 12:04:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.134 12:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:18.701 12:04:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:18.701 12:04:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:18.701 12:04:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:18.960 [2024-11-29 12:04:24.346259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:18.960 [2024-11-29 12:04:24.346406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.960 [2024-11-29 12:04:24.346450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:18.960 [2024-11-29 12:04:24.346484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.960 [2024-11-29 12:04:24.346971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.960 [2024-11-29 12:04:24.347022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:18.960 [2024-11-29 12:04:24.347129] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:18.960 [2024-11-29 12:04:24.347156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:18.960 pt2 00:20:18.960 12:04:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:18.960 12:04:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:18.960 12:04:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:19.218 [2024-11-29 12:04:24.578378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:19.218 [2024-11-29 12:04:24.578501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.218 [2024-11-29 12:04:24.578545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:19.218 [2024-11-29 12:04:24.578577] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.218 [2024-11-29 12:04:24.579081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.218 [2024-11-29 12:04:24.579139] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:19.218 [2024-11-29 12:04:24.579254] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:19.218 [2024-11-29 12:04:24.579282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:19.218 [2024-11-29 12:04:24.579441] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:19.218 [2024-11-29 12:04:24.579466] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:19.219 [2024-11-29 12:04:24.579557] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:20:19.219 [2024-11-29 12:04:24.579889] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:19.219 [2024-11-29 12:04:24.579915] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:20:19.219 [2024-11-29 12:04:24.580031] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.219 pt3 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.219 12:04:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.477 12:04:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.477 "name": "raid_bdev1", 00:20:19.477 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:19.477 "strip_size_kb": 0, 00:20:19.477 "state": "online", 00:20:19.477 "raid_level": "raid1", 00:20:19.477 "superblock": true, 00:20:19.477 "num_base_bdevs": 3, 00:20:19.477 "num_base_bdevs_discovered": 3, 00:20:19.477 "num_base_bdevs_operational": 3, 00:20:19.477 "base_bdevs_list": [ 00:20:19.477 { 00:20:19.477 "name": "pt1", 00:20:19.477 "uuid": "c597d8fe-1b77-52f9-a80c-970894c8219c", 00:20:19.477 "is_configured": true, 00:20:19.477 "data_offset": 2048, 00:20:19.477 "data_size": 63488 00:20:19.477 }, 00:20:19.478 { 00:20:19.478 "name": "pt2", 00:20:19.478 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:19.478 "is_configured": true, 00:20:19.478 "data_offset": 2048, 00:20:19.478 "data_size": 63488 00:20:19.478 }, 00:20:19.478 { 00:20:19.478 "name": "pt3", 00:20:19.478 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:19.478 "is_configured": true, 00:20:19.478 "data_offset": 2048, 00:20:19.478 "data_size": 63488 00:20:19.478 } 00:20:19.478 ] 00:20:19.478 }' 00:20:19.478 12:04:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.478 12:04:24 -- common/autotest_common.sh@10 -- # set +x 00:20:20.043 12:04:25 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:20.043 12:04:25 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:20.302 [2024-11-29 12:04:25.815126] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:20.559 12:04:25 -- bdev/bdev_raid.sh@430 -- # '[' 21b0c3a9-c8ca-4aad-b49c-a8395aa600d8 '!=' 21b0c3a9-c8ca-4aad-b49c-a8395aa600d8 ']' 00:20:20.559 12:04:25 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:20.559 12:04:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:20.559 
12:04:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:20.559 12:04:25 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:20.817 [2024-11-29 12:04:26.110931] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.817 12:04:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.075 12:04:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:21.075 "name": "raid_bdev1", 00:20:21.075 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:21.075 "strip_size_kb": 0, 00:20:21.075 "state": "online", 00:20:21.075 "raid_level": "raid1", 00:20:21.075 "superblock": true, 00:20:21.075 "num_base_bdevs": 3, 00:20:21.075 "num_base_bdevs_discovered": 2, 00:20:21.075 "num_base_bdevs_operational": 2, 00:20:21.075 "base_bdevs_list": [ 00:20:21.075 { 00:20:21.075 "name": null, 00:20:21.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.075 "is_configured": false, 00:20:21.075 "data_offset": 2048, 00:20:21.075 "data_size": 63488 00:20:21.075 }, 00:20:21.075 { 00:20:21.075 "name": "pt2", 00:20:21.075 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:21.075 "is_configured": true, 00:20:21.075 "data_offset": 2048, 00:20:21.075 "data_size": 63488 00:20:21.075 }, 00:20:21.075 { 00:20:21.075 "name": "pt3", 00:20:21.075 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:21.075 "is_configured": true, 00:20:21.075 "data_offset": 2048, 00:20:21.075 "data_size": 63488 00:20:21.075 } 00:20:21.075 ] 00:20:21.075 }' 00:20:21.075 12:04:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:21.075 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.007 12:04:27 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:22.007 [2024-11-29 12:04:27.491246] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.007 [2024-11-29 12:04:27.491315] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.007 [2024-11-29 12:04:27.491444] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.007 [2024-11-29 12:04:27.491545] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.007 [2024-11-29 12:04:27.491561] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:20:22.007 12:04:27 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.007 12:04:27 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:22.573 12:04:27 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:22.573 12:04:27 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:22.573 12:04:27 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:22.573 12:04:27 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:22.573 12:04:27 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:22.830 12:04:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:22.830 12:04:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:22.830 12:04:28 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:23.089 12:04:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:23.089 12:04:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:23.089 12:04:28 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:23.089 12:04:28 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:23.089 12:04:28 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:23.347 [2024-11-29 12:04:28.775505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:23.347 [2024-11-29 12:04:28.775664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.347 [2024-11-29 12:04:28.775732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:23.347 [2024-11-29 12:04:28.775773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.347 [2024-11-29 12:04:28.779035] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.347 [2024-11-29 12:04:28.779129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:23.347 [2024-11-29 12:04:28.779307] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:23.347 [2024-11-29 12:04:28.779384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:23.347 pt2 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.347 12:04:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.605 12:04:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.605 "name": "raid_bdev1", 00:20:23.605 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:23.605 "strip_size_kb": 0, 00:20:23.605 
"state": "configuring", 00:20:23.605 "raid_level": "raid1", 00:20:23.605 "superblock": true, 00:20:23.605 "num_base_bdevs": 3, 00:20:23.605 "num_base_bdevs_discovered": 1, 00:20:23.605 "num_base_bdevs_operational": 2, 00:20:23.605 "base_bdevs_list": [ 00:20:23.605 { 00:20:23.605 "name": null, 00:20:23.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.605 "is_configured": false, 00:20:23.605 "data_offset": 2048, 00:20:23.605 "data_size": 63488 00:20:23.605 }, 00:20:23.605 { 00:20:23.605 "name": "pt2", 00:20:23.605 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:23.605 "is_configured": true, 00:20:23.605 "data_offset": 2048, 00:20:23.605 "data_size": 63488 00:20:23.605 }, 00:20:23.605 { 00:20:23.605 "name": null, 00:20:23.605 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:23.605 "is_configured": false, 00:20:23.605 "data_offset": 2048, 00:20:23.605 "data_size": 63488 00:20:23.605 } 00:20:23.605 ] 00:20:23.605 }' 00:20:23.605 12:04:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.605 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:20:24.539 12:04:29 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:24.539 12:04:29 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:24.539 12:04:29 -- bdev/bdev_raid.sh@462 -- # i=2 00:20:24.539 12:04:29 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:24.797 [2024-11-29 12:04:30.139972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:24.797 [2024-11-29 12:04:30.140086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.797 [2024-11-29 12:04:30.140136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:24.797 [2024-11-29 12:04:30.140164] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.797 [2024-11-29 12:04:30.140677] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.797 [2024-11-29 12:04:30.140727] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:24.797 [2024-11-29 12:04:30.140845] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:24.797 [2024-11-29 12:04:30.140874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:24.797 [2024-11-29 12:04:30.141000] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:20:24.797 [2024-11-29 12:04:30.141024] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:24.797 [2024-11-29 12:04:30.141101] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:20:24.797 [2024-11-29 12:04:30.141453] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:20:24.797 [2024-11-29 12:04:30.141478] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:20:24.797 [2024-11-29 12:04:30.141611] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.797 pt3 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.797 12:04:30 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.797 12:04:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.798 12:04:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.798 12:04:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.798 12:04:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.798 12:04:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.057 12:04:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.057 "name": "raid_bdev1", 00:20:25.057 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:25.057 "strip_size_kb": 0, 00:20:25.057 "state": "online", 00:20:25.057 "raid_level": "raid1", 00:20:25.057 "superblock": true, 00:20:25.057 "num_base_bdevs": 3, 00:20:25.057 "num_base_bdevs_discovered": 2, 00:20:25.057 "num_base_bdevs_operational": 2, 00:20:25.057 "base_bdevs_list": [ 00:20:25.057 { 00:20:25.057 "name": null, 00:20:25.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.057 "is_configured": false, 00:20:25.057 "data_offset": 2048, 00:20:25.057 "data_size": 63488 00:20:25.057 }, 00:20:25.057 { 00:20:25.057 "name": "pt2", 00:20:25.057 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:25.057 "is_configured": true, 00:20:25.057 "data_offset": 2048, 00:20:25.057 "data_size": 63488 00:20:25.057 }, 00:20:25.057 { 00:20:25.057 "name": "pt3", 00:20:25.057 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:25.057 "is_configured": true, 00:20:25.057 "data_offset": 2048, 00:20:25.057 "data_size": 63488 00:20:25.057 } 00:20:25.057 ] 00:20:25.057 }' 00:20:25.057 12:04:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.057 12:04:30 -- common/autotest_common.sh@10 -- # set +x 00:20:25.993 12:04:31 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:20:25.993 12:04:31 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:25.993 [2024-11-29 12:04:31.440273] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.993 [2024-11-29 12:04:31.440337] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.993 [2024-11-29 12:04:31.440438] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.993 [2024-11-29 12:04:31.440510] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.993 [2024-11-29 12:04:31.440522] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:20:25.993 12:04:31 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.993 12:04:31 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:26.251 12:04:31 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:26.251 12:04:31 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:26.251 12:04:31 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:26.510 [2024-11-29 12:04:31.968389] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:20:26.510 [2024-11-29 12:04:31.968530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.510 [2024-11-29 12:04:31.968580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:26.510 [2024-11-29 12:04:31.968605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.510 [2024-11-29 12:04:31.971503] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.510 [2024-11-29 12:04:31.971570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:26.510 [2024-11-29 12:04:31.971693] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:26.510 [2024-11-29 12:04:31.971746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:26.510 pt1 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.510 12:04:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.769 12:04:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.770 "name": "raid_bdev1", 00:20:26.770 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:26.770 "strip_size_kb": 0, 00:20:26.770 "state": "configuring", 00:20:26.770 "raid_level": "raid1", 00:20:26.770 "superblock": true, 00:20:26.770 "num_base_bdevs": 3, 00:20:26.770 "num_base_bdevs_discovered": 1, 00:20:26.770 "num_base_bdevs_operational": 3, 00:20:26.770 "base_bdevs_list": [ 00:20:26.770 { 00:20:26.770 "name": "pt1", 00:20:26.770 "uuid": "c597d8fe-1b77-52f9-a80c-970894c8219c", 00:20:26.770 "is_configured": true, 00:20:26.770 "data_offset": 2048, 00:20:26.770 "data_size": 63488 00:20:26.770 }, 00:20:26.770 { 00:20:26.770 "name": null, 00:20:26.770 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:26.770 "is_configured": false, 00:20:26.770 "data_offset": 2048, 00:20:26.770 "data_size": 63488 00:20:26.770 }, 00:20:26.770 { 00:20:26.770 "name": null, 00:20:26.770 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:26.770 "is_configured": false, 00:20:26.770 "data_offset": 2048, 00:20:26.770 "data_size": 63488 00:20:26.770 } 00:20:26.770 ] 00:20:26.770 }' 00:20:26.770 12:04:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.770 12:04:32 -- common/autotest_common.sh@10 -- # set +x 00:20:27.704 12:04:32 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:27.704 12:04:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:27.704 12:04:32 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:27.704 12:04:33 -- 
bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:27.704 12:04:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:27.704 12:04:33 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:27.963 12:04:33 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:27.963 12:04:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:27.963 12:04:33 -- bdev/bdev_raid.sh@489 -- # i=2 00:20:27.963 12:04:33 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:28.221 [2024-11-29 12:04:33.572711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:28.221 [2024-11-29 12:04:33.572817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.221 [2024-11-29 12:04:33.572857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:28.221 [2024-11-29 12:04:33.572888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.221 [2024-11-29 12:04:33.573391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.221 [2024-11-29 12:04:33.573441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:28.221 [2024-11-29 12:04:33.573554] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:28.221 [2024-11-29 12:04:33.573570] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:28.221 [2024-11-29 12:04:33.573579] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.221 [2024-11-29 12:04:33.573627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:20:28.221 [2024-11-29 12:04:33.573690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:28.221 pt3 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.221 12:04:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.480 12:04:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.480 "name": "raid_bdev1", 00:20:28.480 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:28.480 "strip_size_kb": 0, 00:20:28.480 "state": "configuring", 00:20:28.480 "raid_level": "raid1", 00:20:28.480 "superblock": true, 00:20:28.480 "num_base_bdevs": 3, 00:20:28.480 "num_base_bdevs_discovered": 1, 00:20:28.480 
"num_base_bdevs_operational": 2, 00:20:28.480 "base_bdevs_list": [ 00:20:28.480 { 00:20:28.480 "name": null, 00:20:28.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.480 "is_configured": false, 00:20:28.480 "data_offset": 2048, 00:20:28.480 "data_size": 63488 00:20:28.480 }, 00:20:28.480 { 00:20:28.480 "name": null, 00:20:28.480 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:28.480 "is_configured": false, 00:20:28.480 "data_offset": 2048, 00:20:28.480 "data_size": 63488 00:20:28.480 }, 00:20:28.480 { 00:20:28.480 "name": "pt3", 00:20:28.480 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:28.480 "is_configured": true, 00:20:28.480 "data_offset": 2048, 00:20:28.480 "data_size": 63488 00:20:28.480 } 00:20:28.480 ] 00:20:28.480 }' 00:20:28.480 12:04:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.480 12:04:33 -- common/autotest_common.sh@10 -- # set +x 00:20:29.047 12:04:34 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:29.047 12:04:34 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:29.047 12:04:34 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:29.305 [2024-11-29 12:04:34.773003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:29.305 [2024-11-29 12:04:34.773125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.305 [2024-11-29 12:04:34.773166] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:29.305 [2024-11-29 12:04:34.773198] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.305 [2024-11-29 12:04:34.773701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.305 [2024-11-29 12:04:34.773753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:29.305 [2024-11-29 12:04:34.773859] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:29.305 [2024-11-29 12:04:34.773893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.305 [2024-11-29 12:04:34.774021] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:20:29.305 [2024-11-29 12:04:34.774045] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:29.305 [2024-11-29 12:04:34.774123] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:20:29.305 [2024-11-29 12:04:34.774484] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:20:29.305 [2024-11-29 12:04:34.774509] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:20:29.305 [2024-11-29 12:04:34.774624] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.305 pt2 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.305 12:04:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.870 12:04:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.870 "name": "raid_bdev1", 00:20:29.870 "uuid": "21b0c3a9-c8ca-4aad-b49c-a8395aa600d8", 00:20:29.870 "strip_size_kb": 0, 00:20:29.870 "state": "online", 00:20:29.870 "raid_level": "raid1", 00:20:29.870 "superblock": true, 00:20:29.870 "num_base_bdevs": 3, 00:20:29.870 "num_base_bdevs_discovered": 2, 00:20:29.871 "num_base_bdevs_operational": 2, 00:20:29.871 "base_bdevs_list": [ 00:20:29.871 { 00:20:29.871 "name": null, 00:20:29.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.871 "is_configured": false, 00:20:29.871 "data_offset": 2048, 00:20:29.871 "data_size": 63488 00:20:29.871 }, 00:20:29.871 { 00:20:29.871 "name": "pt2", 00:20:29.871 "uuid": "03f5e4d9-21fc-5283-b779-f99a347b7f9b", 00:20:29.871 "is_configured": true, 00:20:29.871 "data_offset": 2048, 00:20:29.871 "data_size": 63488 00:20:29.871 }, 00:20:29.871 { 00:20:29.871 "name": "pt3", 00:20:29.871 "uuid": "8b57d76d-a166-5bbc-8921-0572bc6504a4", 00:20:29.871 "is_configured": true, 00:20:29.871 "data_offset": 2048, 00:20:29.871 "data_size": 63488 00:20:29.871 } 00:20:29.871 ] 00:20:29.871 }' 00:20:29.871 12:04:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.871 12:04:35 -- common/autotest_common.sh@10 -- # set +x 00:20:30.435 12:04:35 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:30.435 12:04:35 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:30.693 [2024-11-29 12:04:36.017534] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.693 12:04:36 -- bdev/bdev_raid.sh@506 -- # '[' 21b0c3a9-c8ca-4aad-b49c-a8395aa600d8 '!=' 21b0c3a9-c8ca-4aad-b49c-a8395aa600d8 ']' 00:20:30.693 12:04:36 -- bdev/bdev_raid.sh@511 -- # killprocess 129118 00:20:30.693 12:04:36 -- common/autotest_common.sh@936 -- # '[' -z 129118 ']' 00:20:30.693 12:04:36 -- common/autotest_common.sh@940 -- # kill -0 129118 00:20:30.693 12:04:36 -- common/autotest_common.sh@941 -- # uname 00:20:30.693 12:04:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:30.693 12:04:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129118 00:20:30.693 killing process with pid 129118 00:20:30.693 12:04:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:30.693 12:04:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:30.693 12:04:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129118' 00:20:30.693 12:04:36 -- common/autotest_common.sh@955 -- # kill 129118 00:20:30.693 12:04:36 -- common/autotest_common.sh@960 -- # wait 129118 00:20:30.693 [2024-11-29 12:04:36.060499] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.693 [2024-11-29 12:04:36.060613] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
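(For reference, the repeated verify_raid_bdev_state checks traced above reduce to a small RPC sequence. The following is a simplified hand-run sketch, not the exact helper from bdev_raid.sh; it assumes the bdev_svc app is still listening on the same /var/tmp/spdk-raid.sock socket used throughout this run.)

    # Sketch: reproduce the traced state check by hand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump all raid bdevs and pick out raid_bdev1, exactly as the test does.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # Print the fields the test asserts on: state, raid_level,
    # num_base_bdevs_discovered and num_base_bdevs_operational.
    echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational'
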
00:20:30.693 [2024-11-29 12:04:36.060688] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.693 [2024-11-29 12:04:36.060700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:20:30.693 [2024-11-29 12:04:36.103490] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.952 ************************************ 00:20:30.952 END TEST raid_superblock_test 00:20:30.952 ************************************ 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:30.952 00:20:30.952 real 0m21.028s 00:20:30.952 user 0m39.678s 00:20:30.952 sys 0m2.470s 00:20:30.952 12:04:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:30.952 12:04:36 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:20:30.952 12:04:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:30.952 12:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:30.952 12:04:36 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 ************************************ 00:20:30.952 START TEST raid_state_function_test 00:20:30.952 ************************************ 00:20:30.952 12:04:36 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=129748 00:20:30.952 Process raid pid: 129748 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129748' 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129748 /var/tmp/spdk-raid.sock 00:20:30.952 12:04:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:30.952 12:04:36 -- common/autotest_common.sh@829 -- # '[' -z 129748 ']' 00:20:30.952 12:04:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:30.952 12:04:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:30.952 12:04:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:30.952 12:04:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.952 12:04:36 -- common/autotest_common.sh@10 -- # set +x 00:20:31.211 [2024-11-29 12:04:36.478393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:31.211 [2024-11-29 12:04:36.478655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.211 [2024-11-29 12:04:36.631296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.469 [2024-11-29 12:04:36.746624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.469 [2024-11-29 12:04:36.814575] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:32.035 12:04:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.035 12:04:37 -- common/autotest_common.sh@862 -- # return 0 00:20:32.035 12:04:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:32.293 [2024-11-29 12:04:37.737828] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:32.293 [2024-11-29 12:04:37.737941] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:32.293 [2024-11-29 12:04:37.737957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:32.293 [2024-11-29 12:04:37.737979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.293 [2024-11-29 12:04:37.737987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.293 [2024-11-29 12:04:37.738041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.293 [2024-11-29 12:04:37.738051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:32.293 [2024-11-29 12:04:37.738080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:32.293 12:04:37 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.293 12:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.550 12:04:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.550 "name": "Existed_Raid", 00:20:32.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.550 "strip_size_kb": 64, 00:20:32.550 "state": "configuring", 00:20:32.550 "raid_level": "raid0", 00:20:32.550 "superblock": false, 00:20:32.550 "num_base_bdevs": 4, 00:20:32.550 "num_base_bdevs_discovered": 0, 00:20:32.550 "num_base_bdevs_operational": 4, 00:20:32.550 "base_bdevs_list": [ 00:20:32.550 { 00:20:32.550 "name": "BaseBdev1", 00:20:32.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.551 "is_configured": false, 00:20:32.551 "data_offset": 0, 00:20:32.551 "data_size": 0 00:20:32.551 }, 00:20:32.551 { 00:20:32.551 "name": "BaseBdev2", 00:20:32.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.551 "is_configured": false, 00:20:32.551 "data_offset": 0, 00:20:32.551 "data_size": 0 00:20:32.551 }, 00:20:32.551 { 00:20:32.551 "name": "BaseBdev3", 00:20:32.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.551 "is_configured": false, 00:20:32.551 "data_offset": 0, 00:20:32.551 "data_size": 0 00:20:32.551 }, 00:20:32.551 { 00:20:32.551 "name": "BaseBdev4", 00:20:32.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.551 "is_configured": false, 00:20:32.551 "data_offset": 0, 00:20:32.551 "data_size": 0 00:20:32.551 } 00:20:32.551 ] 00:20:32.551 }' 00:20:32.551 12:04:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.551 12:04:37 -- common/autotest_common.sh@10 -- # set +x 00:20:33.117 12:04:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:33.376 [2024-11-29 12:04:38.877917] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:33.376 [2024-11-29 12:04:38.877982] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:20:33.636 12:04:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:33.894 [2024-11-29 12:04:39.150029] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:33.894 [2024-11-29 12:04:39.150138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:33.894 [2024-11-29 12:04:39.150151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:33.894 [2024-11-29 12:04:39.150180] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:33.894 [2024-11-29 12:04:39.150189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:33.894 [2024-11-29 12:04:39.150209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:33.894 [2024-11-29 12:04:39.150216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:33.894 [2024-11-29 12:04:39.150244] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:33.894 12:04:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:34.152 [2024-11-29 12:04:39.449972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.152 BaseBdev1 00:20:34.152 12:04:39 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:34.152 12:04:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:34.152 12:04:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:34.152 12:04:39 -- common/autotest_common.sh@899 -- # local i 00:20:34.152 12:04:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:34.152 12:04:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:34.152 12:04:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.411 12:04:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:34.672 [ 00:20:34.672 { 00:20:34.672 "name": "BaseBdev1", 00:20:34.672 "aliases": [ 00:20:34.672 "21a2b784-d899-4673-8869-23e926d8f22c" 00:20:34.672 ], 00:20:34.672 "product_name": "Malloc disk", 00:20:34.672 "block_size": 512, 00:20:34.672 "num_blocks": 65536, 00:20:34.672 "uuid": "21a2b784-d899-4673-8869-23e926d8f22c", 00:20:34.672 "assigned_rate_limits": { 00:20:34.672 "rw_ios_per_sec": 0, 00:20:34.672 "rw_mbytes_per_sec": 0, 00:20:34.672 "r_mbytes_per_sec": 0, 00:20:34.672 "w_mbytes_per_sec": 0 00:20:34.672 }, 00:20:34.673 "claimed": true, 00:20:34.673 "claim_type": "exclusive_write", 00:20:34.673 "zoned": false, 00:20:34.673 "supported_io_types": { 00:20:34.673 "read": true, 00:20:34.673 "write": true, 00:20:34.673 "unmap": true, 00:20:34.673 "write_zeroes": true, 00:20:34.673 "flush": true, 00:20:34.673 "reset": true, 00:20:34.673 "compare": false, 00:20:34.673 "compare_and_write": false, 00:20:34.673 "abort": true, 00:20:34.673 "nvme_admin": false, 00:20:34.673 "nvme_io": false 00:20:34.673 }, 00:20:34.673 "memory_domains": [ 00:20:34.673 { 00:20:34.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.673 "dma_device_type": 2 00:20:34.673 } 00:20:34.673 ], 00:20:34.673 "driver_specific": {} 00:20:34.673 } 00:20:34.673 ] 00:20:34.673 12:04:39 -- common/autotest_common.sh@905 -- # return 0 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:34.673 12:04:39 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.673 12:04:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.938 12:04:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.938 "name": "Existed_Raid", 00:20:34.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.938 "strip_size_kb": 64, 00:20:34.938 "state": "configuring", 00:20:34.938 "raid_level": "raid0", 00:20:34.938 "superblock": false, 00:20:34.938 "num_base_bdevs": 4, 00:20:34.938 "num_base_bdevs_discovered": 1, 00:20:34.938 "num_base_bdevs_operational": 4, 00:20:34.938 "base_bdevs_list": [ 00:20:34.938 { 00:20:34.938 "name": "BaseBdev1", 00:20:34.938 "uuid": "21a2b784-d899-4673-8869-23e926d8f22c", 00:20:34.938 "is_configured": true, 00:20:34.938 "data_offset": 0, 00:20:34.938 "data_size": 65536 00:20:34.938 }, 00:20:34.938 { 00:20:34.938 "name": "BaseBdev2", 00:20:34.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.938 "is_configured": false, 00:20:34.938 "data_offset": 0, 00:20:34.938 "data_size": 0 00:20:34.938 }, 00:20:34.938 { 00:20:34.938 "name": "BaseBdev3", 00:20:34.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.938 "is_configured": false, 00:20:34.938 "data_offset": 0, 00:20:34.938 "data_size": 0 00:20:34.938 }, 00:20:34.938 { 00:20:34.938 "name": "BaseBdev4", 00:20:34.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.938 "is_configured": false, 00:20:34.938 "data_offset": 0, 00:20:34.938 "data_size": 0 00:20:34.938 } 00:20:34.938 ] 00:20:34.938 }' 00:20:34.938 12:04:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.938 12:04:40 -- common/autotest_common.sh@10 -- # set +x 00:20:35.506 12:04:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:35.765 [2024-11-29 12:04:41.186495] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.765 [2024-11-29 12:04:41.186610] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:20:35.765 12:04:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:35.765 12:04:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:36.023 [2024-11-29 12:04:41.494689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.023 [2024-11-29 12:04:41.497055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.023 [2024-11-29 12:04:41.497163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.023 [2024-11-29 12:04:41.497177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.023 [2024-11-29 12:04:41.497205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.023 [2024-11-29 12:04:41.497215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:36.023 [2024-11-29 
12:04:41.497234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:36.023 12:04:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.024 12:04:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.024 12:04:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.024 12:04:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.024 12:04:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.024 12:04:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.281 12:04:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.281 "name": "Existed_Raid", 00:20:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.281 "strip_size_kb": 64, 00:20:36.281 "state": "configuring", 00:20:36.281 "raid_level": "raid0", 00:20:36.281 "superblock": false, 00:20:36.281 "num_base_bdevs": 4, 00:20:36.281 "num_base_bdevs_discovered": 1, 00:20:36.281 "num_base_bdevs_operational": 4, 00:20:36.281 "base_bdevs_list": [ 00:20:36.281 { 00:20:36.281 "name": "BaseBdev1", 00:20:36.281 "uuid": "21a2b784-d899-4673-8869-23e926d8f22c", 00:20:36.281 "is_configured": true, 00:20:36.281 "data_offset": 0, 00:20:36.281 "data_size": 65536 00:20:36.281 }, 00:20:36.281 { 00:20:36.281 "name": "BaseBdev2", 00:20:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.281 "is_configured": false, 00:20:36.281 "data_offset": 0, 00:20:36.281 "data_size": 0 00:20:36.281 }, 00:20:36.281 { 00:20:36.281 "name": "BaseBdev3", 00:20:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.281 "is_configured": false, 00:20:36.281 "data_offset": 0, 00:20:36.281 "data_size": 0 00:20:36.281 }, 00:20:36.281 { 00:20:36.281 "name": "BaseBdev4", 00:20:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.281 "is_configured": false, 00:20:36.281 "data_offset": 0, 00:20:36.282 "data_size": 0 00:20:36.282 } 00:20:36.282 ] 00:20:36.282 }' 00:20:36.282 12:04:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.282 12:04:41 -- common/autotest_common.sh@10 -- # set +x 00:20:37.217 12:04:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:37.475 [2024-11-29 12:04:42.827011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.475 BaseBdev2 00:20:37.475 12:04:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:37.475 12:04:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:37.475 12:04:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:37.475 12:04:42 -- common/autotest_common.sh@899 -- # local i 00:20:37.475 12:04:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:37.475 12:04:42 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:37.475 12:04:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.734 12:04:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:37.993 [ 00:20:37.993 { 00:20:37.993 "name": "BaseBdev2", 00:20:37.993 "aliases": [ 00:20:37.993 "3ce729d3-2aac-4fd2-8763-02bc346ed55f" 00:20:37.993 ], 00:20:37.993 "product_name": "Malloc disk", 00:20:37.993 "block_size": 512, 00:20:37.993 "num_blocks": 65536, 00:20:37.993 "uuid": "3ce729d3-2aac-4fd2-8763-02bc346ed55f", 00:20:37.993 "assigned_rate_limits": { 00:20:37.993 "rw_ios_per_sec": 0, 00:20:37.993 "rw_mbytes_per_sec": 0, 00:20:37.993 "r_mbytes_per_sec": 0, 00:20:37.993 "w_mbytes_per_sec": 0 00:20:37.993 }, 00:20:37.993 "claimed": true, 00:20:37.993 "claim_type": "exclusive_write", 00:20:37.993 "zoned": false, 00:20:37.993 "supported_io_types": { 00:20:37.993 "read": true, 00:20:37.993 "write": true, 00:20:37.993 "unmap": true, 00:20:37.993 "write_zeroes": true, 00:20:37.993 "flush": true, 00:20:37.993 "reset": true, 00:20:37.993 "compare": false, 00:20:37.993 "compare_and_write": false, 00:20:37.993 "abort": true, 00:20:37.993 "nvme_admin": false, 00:20:37.993 "nvme_io": false 00:20:37.993 }, 00:20:37.993 "memory_domains": [ 00:20:37.993 { 00:20:37.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.993 "dma_device_type": 2 00:20:37.993 } 00:20:37.993 ], 00:20:37.993 "driver_specific": {} 00:20:37.993 } 00:20:37.993 ] 00:20:37.993 12:04:43 -- common/autotest_common.sh@905 -- # return 0 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.993 12:04:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.252 12:04:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:38.252 "name": "Existed_Raid", 00:20:38.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.252 "strip_size_kb": 64, 00:20:38.252 "state": "configuring", 00:20:38.252 "raid_level": "raid0", 00:20:38.252 "superblock": false, 00:20:38.252 "num_base_bdevs": 4, 00:20:38.252 "num_base_bdevs_discovered": 2, 00:20:38.252 "num_base_bdevs_operational": 4, 00:20:38.252 "base_bdevs_list": [ 00:20:38.252 { 00:20:38.252 "name": "BaseBdev1", 00:20:38.252 "uuid": "21a2b784-d899-4673-8869-23e926d8f22c", 00:20:38.252 "is_configured": true, 00:20:38.252 "data_offset": 0, 00:20:38.252 
"data_size": 65536 00:20:38.252 }, 00:20:38.252 { 00:20:38.252 "name": "BaseBdev2", 00:20:38.252 "uuid": "3ce729d3-2aac-4fd2-8763-02bc346ed55f", 00:20:38.252 "is_configured": true, 00:20:38.252 "data_offset": 0, 00:20:38.252 "data_size": 65536 00:20:38.252 }, 00:20:38.252 { 00:20:38.252 "name": "BaseBdev3", 00:20:38.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.252 "is_configured": false, 00:20:38.252 "data_offset": 0, 00:20:38.252 "data_size": 0 00:20:38.252 }, 00:20:38.252 { 00:20:38.252 "name": "BaseBdev4", 00:20:38.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.252 "is_configured": false, 00:20:38.252 "data_offset": 0, 00:20:38.252 "data_size": 0 00:20:38.252 } 00:20:38.252 ] 00:20:38.252 }' 00:20:38.252 12:04:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.252 12:04:43 -- common/autotest_common.sh@10 -- # set +x 00:20:39.187 12:04:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:39.187 [2024-11-29 12:04:44.680885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.187 BaseBdev3 00:20:39.187 12:04:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:39.187 12:04:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:39.187 12:04:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:39.187 12:04:44 -- common/autotest_common.sh@899 -- # local i 00:20:39.187 12:04:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:39.187 12:04:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:39.187 12:04:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.756 12:04:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:40.019 [ 00:20:40.019 { 00:20:40.019 "name": "BaseBdev3", 00:20:40.019 "aliases": [ 00:20:40.019 "b8d854f1-7653-4ea9-8008-e1820f1505ff" 00:20:40.019 ], 00:20:40.019 "product_name": "Malloc disk", 00:20:40.019 "block_size": 512, 00:20:40.019 "num_blocks": 65536, 00:20:40.019 "uuid": "b8d854f1-7653-4ea9-8008-e1820f1505ff", 00:20:40.019 "assigned_rate_limits": { 00:20:40.019 "rw_ios_per_sec": 0, 00:20:40.019 "rw_mbytes_per_sec": 0, 00:20:40.019 "r_mbytes_per_sec": 0, 00:20:40.019 "w_mbytes_per_sec": 0 00:20:40.019 }, 00:20:40.019 "claimed": true, 00:20:40.019 "claim_type": "exclusive_write", 00:20:40.019 "zoned": false, 00:20:40.019 "supported_io_types": { 00:20:40.019 "read": true, 00:20:40.019 "write": true, 00:20:40.019 "unmap": true, 00:20:40.019 "write_zeroes": true, 00:20:40.019 "flush": true, 00:20:40.019 "reset": true, 00:20:40.019 "compare": false, 00:20:40.019 "compare_and_write": false, 00:20:40.019 "abort": true, 00:20:40.019 "nvme_admin": false, 00:20:40.019 "nvme_io": false 00:20:40.019 }, 00:20:40.019 "memory_domains": [ 00:20:40.019 { 00:20:40.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.019 "dma_device_type": 2 00:20:40.019 } 00:20:40.019 ], 00:20:40.019 "driver_specific": {} 00:20:40.019 } 00:20:40.019 ] 00:20:40.019 12:04:45 -- common/autotest_common.sh@905 -- # return 0 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:40.019 12:04:45 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.019 12:04:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.276 12:04:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.276 "name": "Existed_Raid", 00:20:40.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.276 "strip_size_kb": 64, 00:20:40.276 "state": "configuring", 00:20:40.276 "raid_level": "raid0", 00:20:40.276 "superblock": false, 00:20:40.276 "num_base_bdevs": 4, 00:20:40.276 "num_base_bdevs_discovered": 3, 00:20:40.276 "num_base_bdevs_operational": 4, 00:20:40.276 "base_bdevs_list": [ 00:20:40.276 { 00:20:40.276 "name": "BaseBdev1", 00:20:40.276 "uuid": "21a2b784-d899-4673-8869-23e926d8f22c", 00:20:40.276 "is_configured": true, 00:20:40.276 "data_offset": 0, 00:20:40.276 "data_size": 65536 00:20:40.276 }, 00:20:40.276 { 00:20:40.276 "name": "BaseBdev2", 00:20:40.276 "uuid": "3ce729d3-2aac-4fd2-8763-02bc346ed55f", 00:20:40.276 "is_configured": true, 00:20:40.276 "data_offset": 0, 00:20:40.276 "data_size": 65536 00:20:40.276 }, 00:20:40.276 { 00:20:40.276 "name": "BaseBdev3", 00:20:40.276 "uuid": "b8d854f1-7653-4ea9-8008-e1820f1505ff", 00:20:40.276 "is_configured": true, 00:20:40.276 "data_offset": 0, 00:20:40.276 "data_size": 65536 00:20:40.276 }, 00:20:40.276 { 00:20:40.276 "name": "BaseBdev4", 00:20:40.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.276 "is_configured": false, 00:20:40.276 "data_offset": 0, 00:20:40.276 "data_size": 0 00:20:40.276 } 00:20:40.276 ] 00:20:40.276 }' 00:20:40.276 12:04:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.276 12:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:40.841 12:04:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:41.099 [2024-11-29 12:04:46.514525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:41.099 [2024-11-29 12:04:46.514584] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:20:41.099 [2024-11-29 12:04:46.514595] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:41.099 [2024-11-29 12:04:46.514756] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:20:41.099 [2024-11-29 12:04:46.515208] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:20:41.099 [2024-11-29 12:04:46.515233] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:20:41.099 [2024-11-29 12:04:46.515499] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.099 BaseBdev4 00:20:41.099 
12:04:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:41.099 12:04:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:41.099 12:04:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:41.099 12:04:46 -- common/autotest_common.sh@899 -- # local i 00:20:41.099 12:04:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:41.099 12:04:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:41.099 12:04:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:41.357 12:04:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:41.615 [ 00:20:41.615 { 00:20:41.615 "name": "BaseBdev4", 00:20:41.615 "aliases": [ 00:20:41.615 "1f9f0a95-bf48-4a45-8b74-ceb3361db444" 00:20:41.615 ], 00:20:41.615 "product_name": "Malloc disk", 00:20:41.615 "block_size": 512, 00:20:41.615 "num_blocks": 65536, 00:20:41.615 "uuid": "1f9f0a95-bf48-4a45-8b74-ceb3361db444", 00:20:41.615 "assigned_rate_limits": { 00:20:41.615 "rw_ios_per_sec": 0, 00:20:41.615 "rw_mbytes_per_sec": 0, 00:20:41.615 "r_mbytes_per_sec": 0, 00:20:41.615 "w_mbytes_per_sec": 0 00:20:41.615 }, 00:20:41.615 "claimed": true, 00:20:41.615 "claim_type": "exclusive_write", 00:20:41.615 "zoned": false, 00:20:41.615 "supported_io_types": { 00:20:41.615 "read": true, 00:20:41.615 "write": true, 00:20:41.615 "unmap": true, 00:20:41.615 "write_zeroes": true, 00:20:41.615 "flush": true, 00:20:41.615 "reset": true, 00:20:41.615 "compare": false, 00:20:41.615 "compare_and_write": false, 00:20:41.615 "abort": true, 00:20:41.615 "nvme_admin": false, 00:20:41.615 "nvme_io": false 00:20:41.615 }, 00:20:41.615 "memory_domains": [ 00:20:41.615 { 00:20:41.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.615 "dma_device_type": 2 00:20:41.615 } 00:20:41.615 ], 00:20:41.615 "driver_specific": {} 00:20:41.615 } 00:20:41.615 ] 00:20:41.615 12:04:47 -- common/autotest_common.sh@905 -- # return 0 00:20:41.615 12:04:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.616 12:04:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.874 12:04:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.874 "name": "Existed_Raid", 00:20:41.874 "uuid": "01ef226d-8381-4776-ba9a-359e976cff2e", 00:20:41.874 "strip_size_kb": 64, 00:20:41.874 "state": "online", 00:20:41.874 "raid_level": "raid0", 00:20:41.874 
"superblock": false, 00:20:41.874 "num_base_bdevs": 4, 00:20:41.874 "num_base_bdevs_discovered": 4, 00:20:41.874 "num_base_bdevs_operational": 4, 00:20:41.874 "base_bdevs_list": [ 00:20:41.874 { 00:20:41.874 "name": "BaseBdev1", 00:20:41.874 "uuid": "21a2b784-d899-4673-8869-23e926d8f22c", 00:20:41.874 "is_configured": true, 00:20:41.874 "data_offset": 0, 00:20:41.874 "data_size": 65536 00:20:41.874 }, 00:20:41.874 { 00:20:41.874 "name": "BaseBdev2", 00:20:41.874 "uuid": "3ce729d3-2aac-4fd2-8763-02bc346ed55f", 00:20:41.874 "is_configured": true, 00:20:41.874 "data_offset": 0, 00:20:41.874 "data_size": 65536 00:20:41.874 }, 00:20:41.874 { 00:20:41.874 "name": "BaseBdev3", 00:20:41.874 "uuid": "b8d854f1-7653-4ea9-8008-e1820f1505ff", 00:20:41.874 "is_configured": true, 00:20:41.874 "data_offset": 0, 00:20:41.874 "data_size": 65536 00:20:41.874 }, 00:20:41.874 { 00:20:41.874 "name": "BaseBdev4", 00:20:41.874 "uuid": "1f9f0a95-bf48-4a45-8b74-ceb3361db444", 00:20:41.874 "is_configured": true, 00:20:41.874 "data_offset": 0, 00:20:41.874 "data_size": 65536 00:20:41.874 } 00:20:41.874 ] 00:20:41.874 }' 00:20:41.874 12:04:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.874 12:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:42.841 [2024-11-29 12:04:48.235172] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.841 [2024-11-29 12:04:48.235515] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.841 [2024-11-29 12:04:48.235751] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.841 12:04:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.099 12:04:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.099 "name": "Existed_Raid", 00:20:43.099 "uuid": "01ef226d-8381-4776-ba9a-359e976cff2e", 00:20:43.099 "strip_size_kb": 64, 00:20:43.099 "state": "offline", 00:20:43.099 "raid_level": "raid0", 00:20:43.099 "superblock": false, 00:20:43.099 "num_base_bdevs": 4, 00:20:43.099 "num_base_bdevs_discovered": 3, 00:20:43.099 
"num_base_bdevs_operational": 3, 00:20:43.099 "base_bdevs_list": [ 00:20:43.099 { 00:20:43.099 "name": null, 00:20:43.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.099 "is_configured": false, 00:20:43.099 "data_offset": 0, 00:20:43.099 "data_size": 65536 00:20:43.099 }, 00:20:43.099 { 00:20:43.099 "name": "BaseBdev2", 00:20:43.099 "uuid": "3ce729d3-2aac-4fd2-8763-02bc346ed55f", 00:20:43.099 "is_configured": true, 00:20:43.099 "data_offset": 0, 00:20:43.099 "data_size": 65536 00:20:43.099 }, 00:20:43.099 { 00:20:43.099 "name": "BaseBdev3", 00:20:43.099 "uuid": "b8d854f1-7653-4ea9-8008-e1820f1505ff", 00:20:43.099 "is_configured": true, 00:20:43.099 "data_offset": 0, 00:20:43.099 "data_size": 65536 00:20:43.099 }, 00:20:43.099 { 00:20:43.099 "name": "BaseBdev4", 00:20:43.099 "uuid": "1f9f0a95-bf48-4a45-8b74-ceb3361db444", 00:20:43.099 "is_configured": true, 00:20:43.099 "data_offset": 0, 00:20:43.099 "data_size": 65536 00:20:43.099 } 00:20:43.099 ] 00:20:43.099 }' 00:20:43.099 12:04:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.099 12:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:44.033 12:04:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:44.291 [2024-11-29 12:04:49.718672] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:44.291 12:04:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:44.291 12:04:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:44.291 12:04:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.291 12:04:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:44.859 12:04:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:44.859 12:04:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:44.859 12:04:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:44.859 [2024-11-29 12:04:50.353402] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:45.117 12:04:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:45.117 12:04:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:45.117 12:04:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.117 12:04:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:45.375 12:04:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:45.375 12:04:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:45.375 12:04:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:45.634 [2024-11-29 12:04:50.936257] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:45.634 [2024-11-29 12:04:50.936644] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:20:45.634 12:04:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:45.634 12:04:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:45.634 12:04:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:45.634 12:04:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.892 12:04:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:45.892 12:04:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:45.892 12:04:51 -- bdev/bdev_raid.sh@287 -- # killprocess 129748 00:20:45.892 12:04:51 -- common/autotest_common.sh@936 -- # '[' -z 129748 ']' 00:20:45.892 12:04:51 -- common/autotest_common.sh@940 -- # kill -0 129748 00:20:45.892 12:04:51 -- common/autotest_common.sh@941 -- # uname 00:20:45.892 12:04:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:45.892 12:04:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129748 00:20:45.892 killing process with pid 129748 00:20:45.892 12:04:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:45.892 12:04:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:45.892 12:04:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129748' 00:20:45.892 12:04:51 -- common/autotest_common.sh@955 -- # kill 129748 00:20:45.892 12:04:51 -- common/autotest_common.sh@960 -- # wait 129748 00:20:45.892 [2024-11-29 12:04:51.282023] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:45.892 [2024-11-29 12:04:51.282108] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.150 ************************************ 00:20:46.150 END TEST raid_state_function_test 00:20:46.150 ************************************ 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:46.150 00:20:46.150 real 0m15.112s 00:20:46.150 user 0m27.970s 00:20:46.150 sys 0m1.958s 00:20:46.150 12:04:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:46.150 12:04:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:20:46.150 12:04:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:46.150 12:04:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.150 12:04:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.150 ************************************ 00:20:46.150 START TEST raid_state_function_test_sb 00:20:46.150 ************************************ 00:20:46.150 12:04:51 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:46.150 
12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:46.150 12:04:51 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=130198 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130198' 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:46.151 Process raid pid: 130198 00:20:46.151 12:04:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130198 /var/tmp/spdk-raid.sock 00:20:46.151 12:04:51 -- common/autotest_common.sh@829 -- # '[' -z 130198 ']' 00:20:46.151 12:04:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:46.151 12:04:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.151 12:04:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:46.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:46.151 12:04:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.151 12:04:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.151 [2024-11-29 12:04:51.658758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
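(A condensed sketch of the raid0 lifecycle the preceding raid_state_function_test walked through, using the same RPCs, sizes and socket that appear in the trace; the _sb variant starting here differs only in passing -s so the array carries a superblock. Creation order is simplified here relative to the test, which is an assumption about equivalent behavior.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Base bdevs: four 32 MiB malloc disks with 512-byte blocks (65536 blocks each).
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
    done

    # Assemble them into a raid0 bdev with a 64 KiB strip; with all base bdevs
    # present the array state becomes "online".
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

    # raid0 has no redundancy, so deleting any base bdev drops the array to "offline",
    # which is the transition the trace above verifies.
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
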
00:20:46.151 [2024-11-29 12:04:51.659273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.409 [2024-11-29 12:04:51.809479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.409 [2024-11-29 12:04:51.906113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.668 [2024-11-29 12:04:51.960894] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.234 12:04:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.234 12:04:52 -- common/autotest_common.sh@862 -- # return 0 00:20:47.234 12:04:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:47.493 [2024-11-29 12:04:52.907189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:47.493 [2024-11-29 12:04:52.907623] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:47.493 [2024-11-29 12:04:52.907758] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:47.493 [2024-11-29 12:04:52.907836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:47.493 [2024-11-29 12:04:52.907949] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:47.493 [2024-11-29 12:04:52.908060] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:47.493 [2024-11-29 12:04:52.908283] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:47.493 [2024-11-29 12:04:52.908376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.493 12:04:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.753 12:04:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:47.753 "name": "Existed_Raid", 00:20:47.753 "uuid": "a0a71fdc-4aea-4efd-ae28-bc4ff247620a", 00:20:47.753 "strip_size_kb": 64, 00:20:47.753 "state": "configuring", 00:20:47.753 "raid_level": "raid0", 00:20:47.753 "superblock": true, 00:20:47.753 "num_base_bdevs": 4, 00:20:47.753 "num_base_bdevs_discovered": 0, 00:20:47.753 "num_base_bdevs_operational": 4, 00:20:47.753 "base_bdevs_list": [ 00:20:47.753 { 00:20:47.753 
"name": "BaseBdev1", 00:20:47.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.753 "is_configured": false, 00:20:47.753 "data_offset": 0, 00:20:47.753 "data_size": 0 00:20:47.753 }, 00:20:47.753 { 00:20:47.753 "name": "BaseBdev2", 00:20:47.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.753 "is_configured": false, 00:20:47.753 "data_offset": 0, 00:20:47.753 "data_size": 0 00:20:47.753 }, 00:20:47.753 { 00:20:47.753 "name": "BaseBdev3", 00:20:47.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.753 "is_configured": false, 00:20:47.753 "data_offset": 0, 00:20:47.753 "data_size": 0 00:20:47.753 }, 00:20:47.753 { 00:20:47.753 "name": "BaseBdev4", 00:20:47.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.753 "is_configured": false, 00:20:47.753 "data_offset": 0, 00:20:47.753 "data_size": 0 00:20:47.753 } 00:20:47.753 ] 00:20:47.753 }' 00:20:47.753 12:04:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:47.753 12:04:53 -- common/autotest_common.sh@10 -- # set +x 00:20:48.688 12:04:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:48.951 [2024-11-29 12:04:54.203252] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:48.951 [2024-11-29 12:04:54.203585] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:20:48.951 12:04:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:49.225 [2024-11-29 12:04:54.495375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:49.225 [2024-11-29 12:04:54.495735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:49.225 [2024-11-29 12:04:54.495888] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:49.225 [2024-11-29 12:04:54.495960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:49.225 [2024-11-29 12:04:54.496087] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:49.225 [2024-11-29 12:04:54.496150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:49.225 [2024-11-29 12:04:54.496260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:49.225 [2024-11-29 12:04:54.496328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:49.225 12:04:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:49.483 [2024-11-29 12:04:54.786947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.483 BaseBdev1 00:20:49.483 12:04:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:49.483 12:04:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:49.483 12:04:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:49.483 12:04:54 -- common/autotest_common.sh@899 -- # local i 00:20:49.483 12:04:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:49.483 12:04:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:49.483 12:04:54 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:49.740 12:04:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:49.997 [ 00:20:49.997 { 00:20:49.997 "name": "BaseBdev1", 00:20:49.997 "aliases": [ 00:20:49.997 "6693e31d-fc8e-48a5-ae2d-75654c57aaec" 00:20:49.997 ], 00:20:49.997 "product_name": "Malloc disk", 00:20:49.997 "block_size": 512, 00:20:49.997 "num_blocks": 65536, 00:20:49.997 "uuid": "6693e31d-fc8e-48a5-ae2d-75654c57aaec", 00:20:49.997 "assigned_rate_limits": { 00:20:49.997 "rw_ios_per_sec": 0, 00:20:49.997 "rw_mbytes_per_sec": 0, 00:20:49.997 "r_mbytes_per_sec": 0, 00:20:49.997 "w_mbytes_per_sec": 0 00:20:49.997 }, 00:20:49.997 "claimed": true, 00:20:49.997 "claim_type": "exclusive_write", 00:20:49.997 "zoned": false, 00:20:49.997 "supported_io_types": { 00:20:49.997 "read": true, 00:20:49.997 "write": true, 00:20:49.997 "unmap": true, 00:20:49.997 "write_zeroes": true, 00:20:49.997 "flush": true, 00:20:49.997 "reset": true, 00:20:49.998 "compare": false, 00:20:49.998 "compare_and_write": false, 00:20:49.998 "abort": true, 00:20:49.998 "nvme_admin": false, 00:20:49.998 "nvme_io": false 00:20:49.998 }, 00:20:49.998 "memory_domains": [ 00:20:49.998 { 00:20:49.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.998 "dma_device_type": 2 00:20:49.998 } 00:20:49.998 ], 00:20:49.998 "driver_specific": {} 00:20:49.998 } 00:20:49.998 ] 00:20:49.998 12:04:55 -- common/autotest_common.sh@905 -- # return 0 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.998 12:04:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.256 12:04:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.256 "name": "Existed_Raid", 00:20:50.256 "uuid": "74d03871-1091-4b07-ba1d-c1b526624363", 00:20:50.256 "strip_size_kb": 64, 00:20:50.256 "state": "configuring", 00:20:50.256 "raid_level": "raid0", 00:20:50.256 "superblock": true, 00:20:50.256 "num_base_bdevs": 4, 00:20:50.256 "num_base_bdevs_discovered": 1, 00:20:50.256 "num_base_bdevs_operational": 4, 00:20:50.256 "base_bdevs_list": [ 00:20:50.256 { 00:20:50.256 "name": "BaseBdev1", 00:20:50.256 "uuid": "6693e31d-fc8e-48a5-ae2d-75654c57aaec", 00:20:50.256 "is_configured": true, 00:20:50.256 "data_offset": 2048, 00:20:50.256 "data_size": 63488 00:20:50.256 }, 00:20:50.256 { 00:20:50.256 "name": "BaseBdev2", 00:20:50.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.256 "is_configured": false, 00:20:50.256 "data_offset": 0, 00:20:50.256 "data_size": 0 00:20:50.256 }, 
00:20:50.256 { 00:20:50.256 "name": "BaseBdev3", 00:20:50.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.256 "is_configured": false, 00:20:50.256 "data_offset": 0, 00:20:50.256 "data_size": 0 00:20:50.256 }, 00:20:50.256 { 00:20:50.256 "name": "BaseBdev4", 00:20:50.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.256 "is_configured": false, 00:20:50.256 "data_offset": 0, 00:20:50.256 "data_size": 0 00:20:50.256 } 00:20:50.256 ] 00:20:50.256 }' 00:20:50.256 12:04:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.256 12:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.823 12:04:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:51.082 [2024-11-29 12:04:56.499397] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.082 [2024-11-29 12:04:56.499801] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:20:51.082 12:04:56 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:51.082 12:04:56 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:51.341 12:04:56 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:51.599 BaseBdev1 00:20:51.599 12:04:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:51.599 12:04:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:51.599 12:04:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:51.599 12:04:57 -- common/autotest_common.sh@899 -- # local i 00:20:51.599 12:04:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:51.599 12:04:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:51.599 12:04:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.856 12:04:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:52.113 [ 00:20:52.113 { 00:20:52.113 "name": "BaseBdev1", 00:20:52.113 "aliases": [ 00:20:52.113 "2317d608-a13e-473b-9bfd-b869ebc2cd4d" 00:20:52.113 ], 00:20:52.113 "product_name": "Malloc disk", 00:20:52.113 "block_size": 512, 00:20:52.113 "num_blocks": 65536, 00:20:52.113 "uuid": "2317d608-a13e-473b-9bfd-b869ebc2cd4d", 00:20:52.113 "assigned_rate_limits": { 00:20:52.113 "rw_ios_per_sec": 0, 00:20:52.113 "rw_mbytes_per_sec": 0, 00:20:52.113 "r_mbytes_per_sec": 0, 00:20:52.113 "w_mbytes_per_sec": 0 00:20:52.113 }, 00:20:52.113 "claimed": false, 00:20:52.113 "zoned": false, 00:20:52.113 "supported_io_types": { 00:20:52.113 "read": true, 00:20:52.113 "write": true, 00:20:52.113 "unmap": true, 00:20:52.113 "write_zeroes": true, 00:20:52.113 "flush": true, 00:20:52.113 "reset": true, 00:20:52.113 "compare": false, 00:20:52.113 "compare_and_write": false, 00:20:52.113 "abort": true, 00:20:52.113 "nvme_admin": false, 00:20:52.113 "nvme_io": false 00:20:52.113 }, 00:20:52.113 "memory_domains": [ 00:20:52.113 { 00:20:52.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.113 "dma_device_type": 2 00:20:52.113 } 00:20:52.113 ], 00:20:52.113 "driver_specific": {} 00:20:52.113 } 00:20:52.113 ] 00:20:52.113 12:04:57 -- common/autotest_common.sh@905 -- # return 0 00:20:52.113 12:04:57 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:52.371 [2024-11-29 12:04:57.777328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.371 [2024-11-29 12:04:57.779988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.371 [2024-11-29 12:04:57.780282] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.371 [2024-11-29 12:04:57.780410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.371 [2024-11-29 12:04:57.780480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.371 [2024-11-29 12:04:57.780641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:52.371 [2024-11-29 12:04:57.780705] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.371 12:04:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.629 12:04:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.629 "name": "Existed_Raid", 00:20:52.629 "uuid": "e3a65d3b-0f60-461c-9559-bc8573548503", 00:20:52.629 "strip_size_kb": 64, 00:20:52.629 "state": "configuring", 00:20:52.629 "raid_level": "raid0", 00:20:52.629 "superblock": true, 00:20:52.629 "num_base_bdevs": 4, 00:20:52.629 "num_base_bdevs_discovered": 1, 00:20:52.629 "num_base_bdevs_operational": 4, 00:20:52.629 "base_bdevs_list": [ 00:20:52.629 { 00:20:52.629 "name": "BaseBdev1", 00:20:52.629 "uuid": "2317d608-a13e-473b-9bfd-b869ebc2cd4d", 00:20:52.629 "is_configured": true, 00:20:52.629 "data_offset": 2048, 00:20:52.629 "data_size": 63488 00:20:52.629 }, 00:20:52.629 { 00:20:52.629 "name": "BaseBdev2", 00:20:52.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.629 "is_configured": false, 00:20:52.629 "data_offset": 0, 00:20:52.629 "data_size": 0 00:20:52.629 }, 00:20:52.629 { 00:20:52.629 "name": "BaseBdev3", 00:20:52.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.629 "is_configured": false, 00:20:52.629 "data_offset": 0, 00:20:52.629 "data_size": 0 00:20:52.629 }, 00:20:52.629 { 00:20:52.629 "name": "BaseBdev4", 00:20:52.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.629 "is_configured": 
false, 00:20:52.629 "data_offset": 0, 00:20:52.629 "data_size": 0 00:20:52.629 } 00:20:52.629 ] 00:20:52.629 }' 00:20:52.629 12:04:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.629 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:53.194 12:04:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:53.758 [2024-11-29 12:04:58.982255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.758 BaseBdev2 00:20:53.758 12:04:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:53.758 12:04:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:53.758 12:04:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:53.758 12:04:59 -- common/autotest_common.sh@899 -- # local i 00:20:53.759 12:04:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:53.759 12:04:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:53.759 12:04:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.016 12:04:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:54.016 [ 00:20:54.016 { 00:20:54.016 "name": "BaseBdev2", 00:20:54.016 "aliases": [ 00:20:54.016 "c3e873da-1346-4f00-902c-5b2445581374" 00:20:54.016 ], 00:20:54.016 "product_name": "Malloc disk", 00:20:54.016 "block_size": 512, 00:20:54.016 "num_blocks": 65536, 00:20:54.016 "uuid": "c3e873da-1346-4f00-902c-5b2445581374", 00:20:54.016 "assigned_rate_limits": { 00:20:54.016 "rw_ios_per_sec": 0, 00:20:54.016 "rw_mbytes_per_sec": 0, 00:20:54.016 "r_mbytes_per_sec": 0, 00:20:54.016 "w_mbytes_per_sec": 0 00:20:54.016 }, 00:20:54.016 "claimed": true, 00:20:54.016 "claim_type": "exclusive_write", 00:20:54.016 "zoned": false, 00:20:54.016 "supported_io_types": { 00:20:54.016 "read": true, 00:20:54.016 "write": true, 00:20:54.016 "unmap": true, 00:20:54.016 "write_zeroes": true, 00:20:54.016 "flush": true, 00:20:54.016 "reset": true, 00:20:54.016 "compare": false, 00:20:54.016 "compare_and_write": false, 00:20:54.016 "abort": true, 00:20:54.016 "nvme_admin": false, 00:20:54.016 "nvme_io": false 00:20:54.016 }, 00:20:54.016 "memory_domains": [ 00:20:54.016 { 00:20:54.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.016 "dma_device_type": 2 00:20:54.016 } 00:20:54.016 ], 00:20:54.016 "driver_specific": {} 00:20:54.016 } 00:20:54.016 ] 00:20:54.016 12:04:59 -- common/autotest_common.sh@905 -- # return 0 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.016 12:04:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.016 
12:04:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.274 12:04:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.274 12:04:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.533 12:04:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.533 "name": "Existed_Raid", 00:20:54.533 "uuid": "e3a65d3b-0f60-461c-9559-bc8573548503", 00:20:54.533 "strip_size_kb": 64, 00:20:54.533 "state": "configuring", 00:20:54.533 "raid_level": "raid0", 00:20:54.533 "superblock": true, 00:20:54.533 "num_base_bdevs": 4, 00:20:54.533 "num_base_bdevs_discovered": 2, 00:20:54.533 "num_base_bdevs_operational": 4, 00:20:54.533 "base_bdevs_list": [ 00:20:54.533 { 00:20:54.533 "name": "BaseBdev1", 00:20:54.533 "uuid": "2317d608-a13e-473b-9bfd-b869ebc2cd4d", 00:20:54.533 "is_configured": true, 00:20:54.533 "data_offset": 2048, 00:20:54.533 "data_size": 63488 00:20:54.533 }, 00:20:54.533 { 00:20:54.533 "name": "BaseBdev2", 00:20:54.533 "uuid": "c3e873da-1346-4f00-902c-5b2445581374", 00:20:54.533 "is_configured": true, 00:20:54.533 "data_offset": 2048, 00:20:54.533 "data_size": 63488 00:20:54.533 }, 00:20:54.533 { 00:20:54.533 "name": "BaseBdev3", 00:20:54.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.533 "is_configured": false, 00:20:54.533 "data_offset": 0, 00:20:54.533 "data_size": 0 00:20:54.533 }, 00:20:54.533 { 00:20:54.533 "name": "BaseBdev4", 00:20:54.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.533 "is_configured": false, 00:20:54.533 "data_offset": 0, 00:20:54.533 "data_size": 0 00:20:54.533 } 00:20:54.533 ] 00:20:54.533 }' 00:20:54.533 12:04:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.533 12:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:55.097 12:05:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:55.354 [2024-11-29 12:05:00.743933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:55.354 BaseBdev3 00:20:55.354 12:05:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:55.354 12:05:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:55.354 12:05:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:55.354 12:05:00 -- common/autotest_common.sh@899 -- # local i 00:20:55.354 12:05:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:55.354 12:05:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:55.354 12:05:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.612 12:05:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:55.870 [ 00:20:55.870 { 00:20:55.870 "name": "BaseBdev3", 00:20:55.870 "aliases": [ 00:20:55.870 "b021ffff-ea15-4f91-a05a-fad387b39827" 00:20:55.870 ], 00:20:55.870 "product_name": "Malloc disk", 00:20:55.870 "block_size": 512, 00:20:55.870 "num_blocks": 65536, 00:20:55.870 "uuid": "b021ffff-ea15-4f91-a05a-fad387b39827", 00:20:55.870 "assigned_rate_limits": { 00:20:55.870 "rw_ios_per_sec": 0, 00:20:55.870 "rw_mbytes_per_sec": 0, 00:20:55.870 "r_mbytes_per_sec": 0, 00:20:55.870 "w_mbytes_per_sec": 0 00:20:55.870 }, 00:20:55.870 "claimed": true, 00:20:55.870 "claim_type": "exclusive_write", 00:20:55.870 "zoned": false, 
00:20:55.870 "supported_io_types": { 00:20:55.870 "read": true, 00:20:55.870 "write": true, 00:20:55.870 "unmap": true, 00:20:55.870 "write_zeroes": true, 00:20:55.870 "flush": true, 00:20:55.870 "reset": true, 00:20:55.870 "compare": false, 00:20:55.870 "compare_and_write": false, 00:20:55.870 "abort": true, 00:20:55.870 "nvme_admin": false, 00:20:55.870 "nvme_io": false 00:20:55.870 }, 00:20:55.870 "memory_domains": [ 00:20:55.870 { 00:20:55.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.870 "dma_device_type": 2 00:20:55.870 } 00:20:55.870 ], 00:20:55.870 "driver_specific": {} 00:20:55.870 } 00:20:55.870 ] 00:20:55.870 12:05:01 -- common/autotest_common.sh@905 -- # return 0 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.870 12:05:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.129 12:05:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.129 "name": "Existed_Raid", 00:20:56.129 "uuid": "e3a65d3b-0f60-461c-9559-bc8573548503", 00:20:56.129 "strip_size_kb": 64, 00:20:56.129 "state": "configuring", 00:20:56.129 "raid_level": "raid0", 00:20:56.129 "superblock": true, 00:20:56.129 "num_base_bdevs": 4, 00:20:56.129 "num_base_bdevs_discovered": 3, 00:20:56.129 "num_base_bdevs_operational": 4, 00:20:56.129 "base_bdevs_list": [ 00:20:56.129 { 00:20:56.129 "name": "BaseBdev1", 00:20:56.129 "uuid": "2317d608-a13e-473b-9bfd-b869ebc2cd4d", 00:20:56.129 "is_configured": true, 00:20:56.129 "data_offset": 2048, 00:20:56.129 "data_size": 63488 00:20:56.129 }, 00:20:56.129 { 00:20:56.129 "name": "BaseBdev2", 00:20:56.129 "uuid": "c3e873da-1346-4f00-902c-5b2445581374", 00:20:56.129 "is_configured": true, 00:20:56.129 "data_offset": 2048, 00:20:56.129 "data_size": 63488 00:20:56.129 }, 00:20:56.129 { 00:20:56.129 "name": "BaseBdev3", 00:20:56.129 "uuid": "b021ffff-ea15-4f91-a05a-fad387b39827", 00:20:56.129 "is_configured": true, 00:20:56.129 "data_offset": 2048, 00:20:56.129 "data_size": 63488 00:20:56.129 }, 00:20:56.129 { 00:20:56.129 "name": "BaseBdev4", 00:20:56.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.129 "is_configured": false, 00:20:56.129 "data_offset": 0, 00:20:56.129 "data_size": 0 00:20:56.129 } 00:20:56.129 ] 00:20:56.129 }' 00:20:56.129 12:05:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.129 12:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:57.065 12:05:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:20:57.323 [2024-11-29 12:05:02.589721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:57.323 [2024-11-29 12:05:02.590302] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:20:57.323 [2024-11-29 12:05:02.590489] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:57.323 [2024-11-29 12:05:02.590687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:20:57.323 [2024-11-29 12:05:02.591150] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:20:57.323 [2024-11-29 12:05:02.591283] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:20:57.323 BaseBdev4 00:20:57.323 [2024-11-29 12:05:02.591561] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.323 12:05:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:57.323 12:05:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:57.323 12:05:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:57.323 12:05:02 -- common/autotest_common.sh@899 -- # local i 00:20:57.323 12:05:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:57.323 12:05:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:57.323 12:05:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.581 12:05:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:57.840 [ 00:20:57.840 { 00:20:57.840 "name": "BaseBdev4", 00:20:57.840 "aliases": [ 00:20:57.840 "1de9b615-c581-4912-b3c8-aacdd03a4917" 00:20:57.840 ], 00:20:57.840 "product_name": "Malloc disk", 00:20:57.840 "block_size": 512, 00:20:57.840 "num_blocks": 65536, 00:20:57.840 "uuid": "1de9b615-c581-4912-b3c8-aacdd03a4917", 00:20:57.840 "assigned_rate_limits": { 00:20:57.840 "rw_ios_per_sec": 0, 00:20:57.840 "rw_mbytes_per_sec": 0, 00:20:57.840 "r_mbytes_per_sec": 0, 00:20:57.840 "w_mbytes_per_sec": 0 00:20:57.840 }, 00:20:57.840 "claimed": true, 00:20:57.840 "claim_type": "exclusive_write", 00:20:57.840 "zoned": false, 00:20:57.840 "supported_io_types": { 00:20:57.840 "read": true, 00:20:57.840 "write": true, 00:20:57.840 "unmap": true, 00:20:57.840 "write_zeroes": true, 00:20:57.840 "flush": true, 00:20:57.840 "reset": true, 00:20:57.840 "compare": false, 00:20:57.840 "compare_and_write": false, 00:20:57.840 "abort": true, 00:20:57.840 "nvme_admin": false, 00:20:57.840 "nvme_io": false 00:20:57.840 }, 00:20:57.840 "memory_domains": [ 00:20:57.840 { 00:20:57.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.840 "dma_device_type": 2 00:20:57.840 } 00:20:57.840 ], 00:20:57.840 "driver_specific": {} 00:20:57.840 } 00:20:57.840 ] 00:20:57.840 12:05:03 -- common/autotest_common.sh@905 -- # return 0 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
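At this point in the trace the fourth and last malloc base bdev has been created, and the raid module immediately flips Existed_Raid from configuring to online (the io device register / blockcnt 253952 messages above). Condensed to the RPCs actually visible in this log, with the test's own 32 MB / 512-byte malloc geometry and bdev names, the sequence that produces that transition looks like the sketch below (a sketch only, not the test's exact ordering — the harness also tears the set down and rebuilds it along the way):

# registering the array first is fine: it sits in state "configuring" until all members exist
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# each member is claimed as soon as it appears; the last one brings the raid online
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
done
# confirm the state flip the test asserts on
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

The remainder of this test then deletes members one at a time and checks that the array drops from online to offline, which is the trace that follows.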
00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.840 12:05:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.099 12:05:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.099 "name": "Existed_Raid", 00:20:58.099 "uuid": "e3a65d3b-0f60-461c-9559-bc8573548503", 00:20:58.099 "strip_size_kb": 64, 00:20:58.099 "state": "online", 00:20:58.099 "raid_level": "raid0", 00:20:58.099 "superblock": true, 00:20:58.099 "num_base_bdevs": 4, 00:20:58.099 "num_base_bdevs_discovered": 4, 00:20:58.099 "num_base_bdevs_operational": 4, 00:20:58.099 "base_bdevs_list": [ 00:20:58.099 { 00:20:58.099 "name": "BaseBdev1", 00:20:58.099 "uuid": "2317d608-a13e-473b-9bfd-b869ebc2cd4d", 00:20:58.099 "is_configured": true, 00:20:58.099 "data_offset": 2048, 00:20:58.099 "data_size": 63488 00:20:58.099 }, 00:20:58.099 { 00:20:58.099 "name": "BaseBdev2", 00:20:58.099 "uuid": "c3e873da-1346-4f00-902c-5b2445581374", 00:20:58.099 "is_configured": true, 00:20:58.099 "data_offset": 2048, 00:20:58.099 "data_size": 63488 00:20:58.099 }, 00:20:58.099 { 00:20:58.099 "name": "BaseBdev3", 00:20:58.099 "uuid": "b021ffff-ea15-4f91-a05a-fad387b39827", 00:20:58.099 "is_configured": true, 00:20:58.099 "data_offset": 2048, 00:20:58.099 "data_size": 63488 00:20:58.099 }, 00:20:58.099 { 00:20:58.099 "name": "BaseBdev4", 00:20:58.099 "uuid": "1de9b615-c581-4912-b3c8-aacdd03a4917", 00:20:58.099 "is_configured": true, 00:20:58.099 "data_offset": 2048, 00:20:58.099 "data_size": 63488 00:20:58.099 } 00:20:58.099 ] 00:20:58.099 }' 00:20:58.099 12:05:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.099 12:05:03 -- common/autotest_common.sh@10 -- # set +x 00:20:58.665 12:05:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:58.925 [2024-11-29 12:05:04.318288] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:58.925 [2024-11-29 12:05:04.318652] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.925 [2024-11-29 12:05:04.318842] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.925 12:05:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.183 12:05:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.184 "name": "Existed_Raid", 00:20:59.184 "uuid": "e3a65d3b-0f60-461c-9559-bc8573548503", 00:20:59.184 "strip_size_kb": 64, 00:20:59.184 "state": "offline", 00:20:59.184 "raid_level": "raid0", 00:20:59.184 "superblock": true, 00:20:59.184 "num_base_bdevs": 4, 00:20:59.184 "num_base_bdevs_discovered": 3, 00:20:59.184 "num_base_bdevs_operational": 3, 00:20:59.184 "base_bdevs_list": [ 00:20:59.184 { 00:20:59.184 "name": null, 00:20:59.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.184 "is_configured": false, 00:20:59.184 "data_offset": 2048, 00:20:59.184 "data_size": 63488 00:20:59.184 }, 00:20:59.184 { 00:20:59.184 "name": "BaseBdev2", 00:20:59.184 "uuid": "c3e873da-1346-4f00-902c-5b2445581374", 00:20:59.184 "is_configured": true, 00:20:59.184 "data_offset": 2048, 00:20:59.184 "data_size": 63488 00:20:59.184 }, 00:20:59.184 { 00:20:59.184 "name": "BaseBdev3", 00:20:59.184 "uuid": "b021ffff-ea15-4f91-a05a-fad387b39827", 00:20:59.184 "is_configured": true, 00:20:59.184 "data_offset": 2048, 00:20:59.184 "data_size": 63488 00:20:59.184 }, 00:20:59.184 { 00:20:59.184 "name": "BaseBdev4", 00:20:59.184 "uuid": "1de9b615-c581-4912-b3c8-aacdd03a4917", 00:20:59.184 "is_configured": true, 00:20:59.184 "data_offset": 2048, 00:20:59.184 "data_size": 63488 00:20:59.184 } 00:20:59.184 ] 00:20:59.184 }' 00:20:59.184 12:05:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.184 12:05:04 -- common/autotest_common.sh@10 -- # set +x 00:20:59.750 12:05:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:59.750 12:05:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:59.750 12:05:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.750 12:05:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:00.008 12:05:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:00.008 12:05:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.008 12:05:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:00.273 [2024-11-29 12:05:05.676069] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:00.273 12:05:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:00.273 12:05:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:00.273 12:05:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.273 12:05:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:00.543 12:05:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:00.543 12:05:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.543 12:05:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:21:00.801 [2024-11-29 12:05:06.206671] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:00.801 12:05:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:00.801 12:05:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:00.801 12:05:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.801 12:05:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:01.059 12:05:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:01.059 12:05:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:01.059 12:05:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:01.318 [2024-11-29 12:05:06.701122] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:01.318 [2024-11-29 12:05:06.701492] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:21:01.318 12:05:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:01.318 12:05:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:01.318 12:05:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.318 12:05:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:01.577 12:05:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:01.577 12:05:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:01.577 12:05:06 -- bdev/bdev_raid.sh@287 -- # killprocess 130198 00:21:01.577 12:05:06 -- common/autotest_common.sh@936 -- # '[' -z 130198 ']' 00:21:01.577 12:05:06 -- common/autotest_common.sh@940 -- # kill -0 130198 00:21:01.577 12:05:06 -- common/autotest_common.sh@941 -- # uname 00:21:01.577 12:05:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.577 12:05:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130198 00:21:01.577 12:05:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:01.577 12:05:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:01.577 12:05:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130198' 00:21:01.577 killing process with pid 130198 00:21:01.577 12:05:06 -- common/autotest_common.sh@955 -- # kill 130198 00:21:01.577 12:05:06 -- common/autotest_common.sh@960 -- # wait 130198 00:21:01.577 [2024-11-29 12:05:06.994317] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:01.577 [2024-11-29 12:05:06.994431] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:01.836 00:21:01.836 real 0m15.651s 00:21:01.836 user 0m28.953s 00:21:01.836 sys 0m2.024s 00:21:01.836 12:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:01.836 12:05:07 -- common/autotest_common.sh@10 -- # set +x 00:21:01.836 ************************************ 00:21:01.836 END TEST raid_state_function_test_sb 00:21:01.836 ************************************ 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:21:01.836 12:05:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:01.836 12:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.836 12:05:07 -- common/autotest_common.sh@10 -- # set +x 00:21:01.836 ************************************ 00:21:01.836 START 
TEST raid_superblock_test 00:21:01.836 ************************************ 00:21:01.836 12:05:07 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@357 -- # raid_pid=130653 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:01.836 12:05:07 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130653 /var/tmp/spdk-raid.sock 00:21:01.836 12:05:07 -- common/autotest_common.sh@829 -- # '[' -z 130653 ']' 00:21:01.836 12:05:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.836 12:05:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.836 12:05:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:01.836 12:05:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.836 12:05:07 -- common/autotest_common.sh@10 -- # set +x 00:21:02.095 [2024-11-29 12:05:07.356121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
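The raid_superblock_test prologue above declares parallel arrays for malloc backing disks, passthru wrappers, and fixed passthru UUIDs, then starts a fresh bdev_svc on the raid socket. The loop that follows builds each raid member as a passthru bdev on top of a malloc bdev, so the raid0 superblock written through pt1..pt4 lands on the malloc disks and is still there after the raid and passthru layers are deleted — which is what the later "Existing raid superblock found on bdev malloc1" / "File exists" error is checking. A sketch of one member plus the final assembly, using only RPCs and arguments that appear in this trace, is:

# back one member with a 32 MB, 512-byte-block malloc disk
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
# wrap it in a passthru bdev with the test's fixed UUID
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# repeat for malloc2/pt2 .. malloc4/pt4, then assemble the array with an on-disk superblock (-s)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1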
00:21:02.095 [2024-11-29 12:05:07.356622] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130653 ] 00:21:02.095 [2024-11-29 12:05:07.498528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.095 [2024-11-29 12:05:07.593990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.353 [2024-11-29 12:05:07.648672] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.937 12:05:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.937 12:05:08 -- common/autotest_common.sh@862 -- # return 0 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.937 12:05:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:03.196 malloc1 00:21:03.196 12:05:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:03.455 [2024-11-29 12:05:08.784646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:03.455 [2024-11-29 12:05:08.785044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.455 [2024-11-29 12:05:08.785286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:03.455 [2024-11-29 12:05:08.785502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.455 [2024-11-29 12:05:08.789369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.455 [2024-11-29 12:05:08.789598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:03.455 pt1 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:03.455 12:05:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:03.714 malloc2 00:21:03.714 12:05:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:03.972 [2024-11-29 12:05:09.288938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:03.972 [2024-11-29 12:05:09.289340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.972 [2024-11-29 12:05:09.289440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:03.972 [2024-11-29 12:05:09.289617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.972 [2024-11-29 12:05:09.292587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.972 [2024-11-29 12:05:09.292770] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:03.972 pt2 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:03.972 12:05:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:03.973 12:05:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:04.231 malloc3 00:21:04.231 12:05:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:04.489 [2024-11-29 12:05:09.850202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:04.489 [2024-11-29 12:05:09.850568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.489 [2024-11-29 12:05:09.850743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:04.489 [2024-11-29 12:05:09.850917] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.489 [2024-11-29 12:05:09.853944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.489 [2024-11-29 12:05:09.854129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:04.489 pt3 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:04.489 12:05:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:04.748 malloc4 00:21:04.748 12:05:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:21:05.006 [2024-11-29 12:05:10.409365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:05.006 [2024-11-29 12:05:10.409791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.006 [2024-11-29 12:05:10.409918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:05.006 [2024-11-29 12:05:10.410201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.006 [2024-11-29 12:05:10.413074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.006 [2024-11-29 12:05:10.413259] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:05.006 pt4 00:21:05.006 12:05:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:05.006 12:05:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:05.006 12:05:10 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:05.266 [2024-11-29 12:05:10.641883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:05.266 [2024-11-29 12:05:10.644724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:05.266 [2024-11-29 12:05:10.644929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:05.266 [2024-11-29 12:05:10.645035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:05.266 [2024-11-29 12:05:10.645474] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:21:05.266 [2024-11-29 12:05:10.645599] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:05.266 [2024-11-29 12:05:10.645820] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:05.266 [2024-11-29 12:05:10.646535] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:21:05.266 [2024-11-29 12:05:10.646661] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:21:05.266 [2024-11-29 12:05:10.647009] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.266 12:05:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.524 12:05:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.524 "name": "raid_bdev1", 00:21:05.524 "uuid": 
"42dceb6e-9bd8-4c71-b099-db42f2acd66e", 00:21:05.524 "strip_size_kb": 64, 00:21:05.524 "state": "online", 00:21:05.524 "raid_level": "raid0", 00:21:05.524 "superblock": true, 00:21:05.524 "num_base_bdevs": 4, 00:21:05.524 "num_base_bdevs_discovered": 4, 00:21:05.524 "num_base_bdevs_operational": 4, 00:21:05.524 "base_bdevs_list": [ 00:21:05.524 { 00:21:05.524 "name": "pt1", 00:21:05.524 "uuid": "330116ff-5da4-5893-ad06-00b77970de00", 00:21:05.524 "is_configured": true, 00:21:05.524 "data_offset": 2048, 00:21:05.524 "data_size": 63488 00:21:05.524 }, 00:21:05.524 { 00:21:05.524 "name": "pt2", 00:21:05.524 "uuid": "7ea0b406-425d-50c4-a390-0aa6e4338494", 00:21:05.524 "is_configured": true, 00:21:05.524 "data_offset": 2048, 00:21:05.524 "data_size": 63488 00:21:05.524 }, 00:21:05.524 { 00:21:05.524 "name": "pt3", 00:21:05.524 "uuid": "e665e657-f7c6-5616-ac5b-efd9e4a7a823", 00:21:05.524 "is_configured": true, 00:21:05.524 "data_offset": 2048, 00:21:05.524 "data_size": 63488 00:21:05.524 }, 00:21:05.524 { 00:21:05.524 "name": "pt4", 00:21:05.524 "uuid": "7a18d892-aaed-5905-9ef3-4ba3529bf112", 00:21:05.524 "is_configured": true, 00:21:05.524 "data_offset": 2048, 00:21:05.524 "data_size": 63488 00:21:05.524 } 00:21:05.524 ] 00:21:05.524 }' 00:21:05.524 12:05:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.524 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 12:05:11 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:06.458 12:05:11 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:06.458 [2024-11-29 12:05:11.875694] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.458 12:05:11 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=42dceb6e-9bd8-4c71-b099-db42f2acd66e 00:21:06.458 12:05:11 -- bdev/bdev_raid.sh@380 -- # '[' -z 42dceb6e-9bd8-4c71-b099-db42f2acd66e ']' 00:21:06.458 12:05:11 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.716 [2024-11-29 12:05:12.211396] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.716 [2024-11-29 12:05:12.211666] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.716 [2024-11-29 12:05:12.211950] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.716 [2024-11-29 12:05:12.212183] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.716 [2024-11-29 12:05:12.212385] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:21:06.973 12:05:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:06.973 12:05:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.231 12:05:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:07.231 12:05:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:07.231 12:05:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.231 12:05:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:07.489 12:05:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.489 12:05:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:21:07.747 12:05:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.747 12:05:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:08.006 12:05:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:08.006 12:05:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:08.578 12:05:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:08.578 12:05:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:08.836 12:05:14 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:08.836 12:05:14 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:08.836 12:05:14 -- common/autotest_common.sh@650 -- # local es=0 00:21:08.836 12:05:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:08.836 12:05:14 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.836 12:05:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.836 12:05:14 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.836 12:05:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.836 12:05:14 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.836 12:05:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.836 12:05:14 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.836 12:05:14 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:08.836 12:05:14 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:09.094 [2024-11-29 12:05:14.439824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:09.094 [2024-11-29 12:05:14.442801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:09.094 [2024-11-29 12:05:14.443005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:09.094 [2024-11-29 12:05:14.443179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:09.094 [2024-11-29 12:05:14.443385] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:09.094 [2024-11-29 12:05:14.443612] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:09.094 [2024-11-29 12:05:14.443778] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:09.094 [2024-11-29 12:05:14.443969] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:09.094 [2024-11-29 12:05:14.444172] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:09.094 [2024-11-29 12:05:14.444436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:21:09.094 request: 00:21:09.094 { 00:21:09.094 "name": "raid_bdev1", 00:21:09.094 "raid_level": "raid0", 00:21:09.094 "base_bdevs": [ 00:21:09.094 "malloc1", 00:21:09.094 "malloc2", 00:21:09.094 "malloc3", 00:21:09.094 "malloc4" 00:21:09.094 ], 00:21:09.094 "superblock": false, 00:21:09.094 "strip_size_kb": 64, 00:21:09.094 "method": "bdev_raid_create", 00:21:09.094 "req_id": 1 00:21:09.094 } 00:21:09.094 Got JSON-RPC error response 00:21:09.094 response: 00:21:09.094 { 00:21:09.094 "code": -17, 00:21:09.094 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:09.094 } 00:21:09.094 12:05:14 -- common/autotest_common.sh@653 -- # es=1 00:21:09.094 12:05:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.094 12:05:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.094 12:05:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.094 12:05:14 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.094 12:05:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:09.352 12:05:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:09.352 12:05:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:09.352 12:05:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:09.610 [2024-11-29 12:05:15.101282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:09.610 [2024-11-29 12:05:15.101804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.610 [2024-11-29 12:05:15.102053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:09.610 [2024-11-29 12:05:15.102279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.610 [2024-11-29 12:05:15.105963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.610 [2024-11-29 12:05:15.106203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:09.610 [2024-11-29 12:05:15.106512] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:09.610 [2024-11-29 12:05:15.106747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:09.610 pt1 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.866 12:05:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.124 12:05:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.124 "name": "raid_bdev1", 00:21:10.124 "uuid": "42dceb6e-9bd8-4c71-b099-db42f2acd66e", 00:21:10.124 "strip_size_kb": 64, 00:21:10.124 "state": "configuring", 00:21:10.124 "raid_level": "raid0", 00:21:10.124 "superblock": true, 00:21:10.124 "num_base_bdevs": 4, 00:21:10.124 "num_base_bdevs_discovered": 1, 00:21:10.124 "num_base_bdevs_operational": 4, 00:21:10.124 "base_bdevs_list": [ 00:21:10.124 { 00:21:10.124 "name": "pt1", 00:21:10.124 "uuid": "330116ff-5da4-5893-ad06-00b77970de00", 00:21:10.124 "is_configured": true, 00:21:10.124 "data_offset": 2048, 00:21:10.124 "data_size": 63488 00:21:10.124 }, 00:21:10.124 { 00:21:10.124 "name": null, 00:21:10.124 "uuid": "7ea0b406-425d-50c4-a390-0aa6e4338494", 00:21:10.124 "is_configured": false, 00:21:10.124 "data_offset": 2048, 00:21:10.124 "data_size": 63488 00:21:10.124 }, 00:21:10.124 { 00:21:10.124 "name": null, 00:21:10.124 "uuid": "e665e657-f7c6-5616-ac5b-efd9e4a7a823", 00:21:10.124 "is_configured": false, 00:21:10.124 "data_offset": 2048, 00:21:10.124 "data_size": 63488 00:21:10.124 }, 00:21:10.124 { 00:21:10.124 "name": null, 00:21:10.124 "uuid": "7a18d892-aaed-5905-9ef3-4ba3529bf112", 00:21:10.124 "is_configured": false, 00:21:10.124 "data_offset": 2048, 00:21:10.124 "data_size": 63488 00:21:10.124 } 00:21:10.124 ] 00:21:10.124 }' 00:21:10.124 12:05:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.124 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:10.690 12:05:16 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:10.690 12:05:16 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:10.947 [2024-11-29 12:05:16.415092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:10.947 [2024-11-29 12:05:16.415571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.947 [2024-11-29 12:05:16.415771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:10.947 [2024-11-29 12:05:16.415947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.947 [2024-11-29 12:05:16.416646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.947 [2024-11-29 12:05:16.416839] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:10.947 [2024-11-29 12:05:16.417125] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:10.947 [2024-11-29 12:05:16.417309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.947 pt2 00:21:10.947 12:05:16 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:11.550 [2024-11-29 12:05:16.731098] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:11.550 12:05:16 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.550 12:05:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.550 12:05:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.550 "name": "raid_bdev1", 00:21:11.550 "uuid": "42dceb6e-9bd8-4c71-b099-db42f2acd66e", 00:21:11.550 "strip_size_kb": 64, 00:21:11.550 "state": "configuring", 00:21:11.550 "raid_level": "raid0", 00:21:11.550 "superblock": true, 00:21:11.550 "num_base_bdevs": 4, 00:21:11.550 "num_base_bdevs_discovered": 1, 00:21:11.550 "num_base_bdevs_operational": 4, 00:21:11.550 "base_bdevs_list": [ 00:21:11.550 { 00:21:11.550 "name": "pt1", 00:21:11.550 "uuid": "330116ff-5da4-5893-ad06-00b77970de00", 00:21:11.550 "is_configured": true, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "name": null, 00:21:11.550 "uuid": "7ea0b406-425d-50c4-a390-0aa6e4338494", 00:21:11.550 "is_configured": false, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "name": null, 00:21:11.550 "uuid": "e665e657-f7c6-5616-ac5b-efd9e4a7a823", 00:21:11.550 "is_configured": false, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "name": null, 00:21:11.550 "uuid": "7a18d892-aaed-5905-9ef3-4ba3529bf112", 00:21:11.550 "is_configured": false, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 } 00:21:11.550 ] 00:21:11.550 }' 00:21:11.550 12:05:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.550 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:21:12.487 12:05:17 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:12.487 12:05:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:12.487 12:05:17 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:12.487 [2024-11-29 12:05:17.898934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:12.487 [2024-11-29 12:05:17.899393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.487 [2024-11-29 12:05:17.899493] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:12.487 [2024-11-29 12:05:17.899671] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.487 [2024-11-29 12:05:17.900246] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.487 [2024-11-29 12:05:17.900443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:12.487 [2024-11-29 12:05:17.900655] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:12.487 [2024-11-29 12:05:17.900787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:12.487 pt2 00:21:12.487 12:05:17 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:12.487 12:05:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:12.487 12:05:17 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:12.745 [2024-11-29 12:05:18.179011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:12.745 [2024-11-29 12:05:18.179460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.745 [2024-11-29 12:05:18.179643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:12.745 [2024-11-29 12:05:18.179784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.745 [2024-11-29 12:05:18.180401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.745 [2024-11-29 12:05:18.180582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:12.745 [2024-11-29 12:05:18.180862] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:12.745 [2024-11-29 12:05:18.180943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:12.745 pt3 00:21:12.745 12:05:18 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:12.745 12:05:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:12.745 12:05:18 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:13.004 [2024-11-29 12:05:18.459055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:13.004 [2024-11-29 12:05:18.459467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.004 [2024-11-29 12:05:18.459551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:13.004 [2024-11-29 12:05:18.459807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.004 [2024-11-29 12:05:18.460346] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.004 [2024-11-29 12:05:18.460547] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:13.004 [2024-11-29 12:05:18.460759] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:13.004 [2024-11-29 12:05:18.460915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:13.004 [2024-11-29 12:05:18.461110] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:13.004 [2024-11-29 12:05:18.461229] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:13.004 [2024-11-29 12:05:18.461364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:21:13.004 [2024-11-29 12:05:18.461860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:13.004 [2024-11-29 12:05:18.462006] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:13.004 [2024-11-29 12:05:18.462231] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.004 pt4 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.004 12:05:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.263 12:05:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:13.263 "name": "raid_bdev1", 00:21:13.263 "uuid": "42dceb6e-9bd8-4c71-b099-db42f2acd66e", 00:21:13.263 "strip_size_kb": 64, 00:21:13.263 "state": "online", 00:21:13.263 "raid_level": "raid0", 00:21:13.263 "superblock": true, 00:21:13.263 "num_base_bdevs": 4, 00:21:13.263 "num_base_bdevs_discovered": 4, 00:21:13.263 "num_base_bdevs_operational": 4, 00:21:13.263 "base_bdevs_list": [ 00:21:13.263 { 00:21:13.263 "name": "pt1", 00:21:13.263 "uuid": "330116ff-5da4-5893-ad06-00b77970de00", 00:21:13.263 "is_configured": true, 00:21:13.263 "data_offset": 2048, 00:21:13.263 "data_size": 63488 00:21:13.263 }, 00:21:13.263 { 00:21:13.263 "name": "pt2", 00:21:13.263 "uuid": "7ea0b406-425d-50c4-a390-0aa6e4338494", 00:21:13.263 "is_configured": true, 00:21:13.263 "data_offset": 2048, 00:21:13.263 "data_size": 63488 00:21:13.263 }, 00:21:13.263 { 00:21:13.263 "name": "pt3", 00:21:13.263 "uuid": "e665e657-f7c6-5616-ac5b-efd9e4a7a823", 00:21:13.263 "is_configured": true, 00:21:13.263 "data_offset": 2048, 00:21:13.263 "data_size": 63488 00:21:13.263 }, 00:21:13.263 { 00:21:13.263 "name": "pt4", 00:21:13.263 "uuid": "7a18d892-aaed-5905-9ef3-4ba3529bf112", 00:21:13.263 "is_configured": true, 00:21:13.263 "data_offset": 2048, 00:21:13.263 "data_size": 63488 00:21:13.263 } 00:21:13.263 ] 00:21:13.263 }' 00:21:13.263 12:05:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:13.263 12:05:18 -- common/autotest_common.sh@10 -- # set +x 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.199 [2024-11-29 12:05:19.635255] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@430 -- # '[' 42dceb6e-9bd8-4c71-b099-db42f2acd66e '!=' 42dceb6e-9bd8-4c71-b099-db42f2acd66e ']' 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:14.199 12:05:19 -- bdev/bdev_raid.sh@511 -- # killprocess 130653 00:21:14.199 12:05:19 -- common/autotest_common.sh@936 -- # '[' -z 130653 ']' 00:21:14.199 12:05:19 -- common/autotest_common.sh@940 -- # kill -0 130653 00:21:14.199 12:05:19 -- common/autotest_common.sh@941 -- # uname 00:21:14.199 12:05:19 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.199 12:05:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130653 00:21:14.199 killing process with pid 130653 00:21:14.199 12:05:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:14.199 12:05:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:14.199 12:05:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130653' 00:21:14.199 12:05:19 -- common/autotest_common.sh@955 -- # kill 130653 00:21:14.199 12:05:19 -- common/autotest_common.sh@960 -- # wait 130653 00:21:14.199 [2024-11-29 12:05:19.681431] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.199 [2024-11-29 12:05:19.681563] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.199 [2024-11-29 12:05:19.681657] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.199 [2024-11-29 12:05:19.681670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:14.459 [2024-11-29 12:05:19.741370] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:14.717 ************************************ 00:21:14.717 END TEST raid_superblock_test 00:21:14.717 ************************************ 00:21:14.717 12:05:19 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:14.717 00:21:14.717 real 0m12.693s 00:21:14.717 user 0m23.097s 00:21:14.717 sys 0m1.623s 00:21:14.717 12:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:14.717 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:14.717 12:05:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:14.717 12:05:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.717 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:21:14.717 ************************************ 00:21:14.717 START TEST raid_state_function_test 00:21:14.717 ************************************ 00:21:14.717 12:05:20 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:14.717 
12:05:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=130994 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130994' 00:21:14.717 Process raid pid: 130994 00:21:14.717 12:05:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130994 /var/tmp/spdk-raid.sock 00:21:14.717 12:05:20 -- common/autotest_common.sh@829 -- # '[' -z 130994 ']' 00:21:14.717 12:05:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:14.717 12:05:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.717 12:05:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:14.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:14.717 12:05:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.717 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:21:14.717 [2024-11-29 12:05:20.121308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:14.717 [2024-11-29 12:05:20.121946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.974 [2024-11-29 12:05:20.281097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.974 [2024-11-29 12:05:20.394116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.974 [2024-11-29 12:05:20.460439] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.910 12:05:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.910 12:05:21 -- common/autotest_common.sh@862 -- # return 0 00:21:15.910 12:05:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:15.910 [2024-11-29 12:05:21.384674] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.910 [2024-11-29 12:05:21.385453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.911 [2024-11-29 12:05:21.385643] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.911 [2024-11-29 12:05:21.385718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.911 [2024-11-29 12:05:21.385754] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:15.911 [2024-11-29 12:05:21.385971] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:15.911 [2024-11-29 12:05:21.386021] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:15.911 [2024-11-29 12:05:21.386078] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.911 12:05:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.477 12:05:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.477 "name": "Existed_Raid", 00:21:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.477 "strip_size_kb": 64, 00:21:16.477 "state": "configuring", 00:21:16.477 "raid_level": "concat", 00:21:16.477 "superblock": false, 00:21:16.477 "num_base_bdevs": 4, 00:21:16.477 "num_base_bdevs_discovered": 0, 00:21:16.477 "num_base_bdevs_operational": 4, 00:21:16.477 "base_bdevs_list": [ 00:21:16.477 { 00:21:16.477 
"name": "BaseBdev1", 00:21:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.477 "is_configured": false, 00:21:16.477 "data_offset": 0, 00:21:16.477 "data_size": 0 00:21:16.477 }, 00:21:16.477 { 00:21:16.477 "name": "BaseBdev2", 00:21:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.477 "is_configured": false, 00:21:16.477 "data_offset": 0, 00:21:16.477 "data_size": 0 00:21:16.477 }, 00:21:16.477 { 00:21:16.477 "name": "BaseBdev3", 00:21:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.477 "is_configured": false, 00:21:16.477 "data_offset": 0, 00:21:16.477 "data_size": 0 00:21:16.477 }, 00:21:16.477 { 00:21:16.477 "name": "BaseBdev4", 00:21:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.477 "is_configured": false, 00:21:16.477 "data_offset": 0, 00:21:16.477 "data_size": 0 00:21:16.477 } 00:21:16.477 ] 00:21:16.477 }' 00:21:16.477 12:05:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.477 12:05:21 -- common/autotest_common.sh@10 -- # set +x 00:21:17.044 12:05:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:17.303 [2024-11-29 12:05:22.628733] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.303 [2024-11-29 12:05:22.629084] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:21:17.303 12:05:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:17.561 [2024-11-29 12:05:22.864830] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.561 [2024-11-29 12:05:22.865185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.561 [2024-11-29 12:05:22.865311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.561 [2024-11-29 12:05:22.865386] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.561 [2024-11-29 12:05:22.865633] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:17.561 [2024-11-29 12:05:22.865700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:17.561 [2024-11-29 12:05:22.865734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:17.561 [2024-11-29 12:05:22.865869] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:17.561 12:05:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.820 [2024-11-29 12:05:23.156903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.820 BaseBdev1 00:21:17.820 12:05:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:17.820 12:05:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:17.820 12:05:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:17.820 12:05:23 -- common/autotest_common.sh@899 -- # local i 00:21:17.820 12:05:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:17.820 12:05:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:17.820 12:05:23 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.077 12:05:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:18.336 [ 00:21:18.336 { 00:21:18.336 "name": "BaseBdev1", 00:21:18.336 "aliases": [ 00:21:18.336 "260ab773-93c8-4f04-a1e6-7bef0ca405a0" 00:21:18.336 ], 00:21:18.336 "product_name": "Malloc disk", 00:21:18.336 "block_size": 512, 00:21:18.336 "num_blocks": 65536, 00:21:18.336 "uuid": "260ab773-93c8-4f04-a1e6-7bef0ca405a0", 00:21:18.336 "assigned_rate_limits": { 00:21:18.336 "rw_ios_per_sec": 0, 00:21:18.336 "rw_mbytes_per_sec": 0, 00:21:18.336 "r_mbytes_per_sec": 0, 00:21:18.336 "w_mbytes_per_sec": 0 00:21:18.336 }, 00:21:18.336 "claimed": true, 00:21:18.336 "claim_type": "exclusive_write", 00:21:18.336 "zoned": false, 00:21:18.336 "supported_io_types": { 00:21:18.336 "read": true, 00:21:18.336 "write": true, 00:21:18.336 "unmap": true, 00:21:18.336 "write_zeroes": true, 00:21:18.336 "flush": true, 00:21:18.336 "reset": true, 00:21:18.336 "compare": false, 00:21:18.336 "compare_and_write": false, 00:21:18.336 "abort": true, 00:21:18.336 "nvme_admin": false, 00:21:18.336 "nvme_io": false 00:21:18.336 }, 00:21:18.336 "memory_domains": [ 00:21:18.336 { 00:21:18.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.336 "dma_device_type": 2 00:21:18.336 } 00:21:18.336 ], 00:21:18.336 "driver_specific": {} 00:21:18.336 } 00:21:18.336 ] 00:21:18.336 12:05:23 -- common/autotest_common.sh@905 -- # return 0 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.336 12:05:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.594 12:05:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.594 "name": "Existed_Raid", 00:21:18.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.594 "strip_size_kb": 64, 00:21:18.594 "state": "configuring", 00:21:18.594 "raid_level": "concat", 00:21:18.594 "superblock": false, 00:21:18.594 "num_base_bdevs": 4, 00:21:18.594 "num_base_bdevs_discovered": 1, 00:21:18.594 "num_base_bdevs_operational": 4, 00:21:18.594 "base_bdevs_list": [ 00:21:18.594 { 00:21:18.594 "name": "BaseBdev1", 00:21:18.594 "uuid": "260ab773-93c8-4f04-a1e6-7bef0ca405a0", 00:21:18.595 "is_configured": true, 00:21:18.595 "data_offset": 0, 00:21:18.595 "data_size": 65536 00:21:18.595 }, 00:21:18.595 { 00:21:18.595 "name": "BaseBdev2", 00:21:18.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.595 "is_configured": false, 00:21:18.595 "data_offset": 0, 00:21:18.595 "data_size": 0 00:21:18.595 }, 
00:21:18.595 { 00:21:18.595 "name": "BaseBdev3", 00:21:18.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.595 "is_configured": false, 00:21:18.595 "data_offset": 0, 00:21:18.595 "data_size": 0 00:21:18.595 }, 00:21:18.595 { 00:21:18.595 "name": "BaseBdev4", 00:21:18.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.595 "is_configured": false, 00:21:18.595 "data_offset": 0, 00:21:18.595 "data_size": 0 00:21:18.595 } 00:21:18.595 ] 00:21:18.595 }' 00:21:18.595 12:05:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.595 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:21:19.529 12:05:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:19.529 [2024-11-29 12:05:24.973398] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:19.529 [2024-11-29 12:05:24.973499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:21:19.529 12:05:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:19.529 12:05:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:19.789 [2024-11-29 12:05:25.205559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:19.789 [2024-11-29 12:05:25.207987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:19.789 [2024-11-29 12:05:25.208089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:19.789 [2024-11-29 12:05:25.208103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:19.789 [2024-11-29 12:05:25.208131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:19.789 [2024-11-29 12:05:25.208140] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:19.789 [2024-11-29 12:05:25.208158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:19.789 12:05:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:19.789 12:05:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:19.789 12:05:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:19.789 12:05:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.790 12:05:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.048 12:05:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.048 "name": "Existed_Raid", 00:21:20.048 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.048 "strip_size_kb": 64, 00:21:20.048 "state": "configuring", 00:21:20.048 "raid_level": "concat", 00:21:20.048 "superblock": false, 00:21:20.048 "num_base_bdevs": 4, 00:21:20.048 "num_base_bdevs_discovered": 1, 00:21:20.048 "num_base_bdevs_operational": 4, 00:21:20.048 "base_bdevs_list": [ 00:21:20.048 { 00:21:20.048 "name": "BaseBdev1", 00:21:20.048 "uuid": "260ab773-93c8-4f04-a1e6-7bef0ca405a0", 00:21:20.048 "is_configured": true, 00:21:20.048 "data_offset": 0, 00:21:20.048 "data_size": 65536 00:21:20.048 }, 00:21:20.048 { 00:21:20.048 "name": "BaseBdev2", 00:21:20.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.048 "is_configured": false, 00:21:20.048 "data_offset": 0, 00:21:20.048 "data_size": 0 00:21:20.048 }, 00:21:20.048 { 00:21:20.048 "name": "BaseBdev3", 00:21:20.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.048 "is_configured": false, 00:21:20.048 "data_offset": 0, 00:21:20.048 "data_size": 0 00:21:20.048 }, 00:21:20.048 { 00:21:20.048 "name": "BaseBdev4", 00:21:20.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.048 "is_configured": false, 00:21:20.048 "data_offset": 0, 00:21:20.048 "data_size": 0 00:21:20.048 } 00:21:20.048 ] 00:21:20.048 }' 00:21:20.048 12:05:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.048 12:05:25 -- common/autotest_common.sh@10 -- # set +x 00:21:20.616 12:05:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:21.180 [2024-11-29 12:05:26.431195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.180 BaseBdev2 00:21:21.180 12:05:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:21.180 12:05:26 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:21.180 12:05:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:21.180 12:05:26 -- common/autotest_common.sh@899 -- # local i 00:21:21.180 12:05:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:21.180 12:05:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:21.180 12:05:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:21.439 12:05:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:21.698 [ 00:21:21.698 { 00:21:21.698 "name": "BaseBdev2", 00:21:21.698 "aliases": [ 00:21:21.698 "3c622f2e-3bb0-4917-9c1c-43d206178962" 00:21:21.698 ], 00:21:21.698 "product_name": "Malloc disk", 00:21:21.698 "block_size": 512, 00:21:21.698 "num_blocks": 65536, 00:21:21.698 "uuid": "3c622f2e-3bb0-4917-9c1c-43d206178962", 00:21:21.698 "assigned_rate_limits": { 00:21:21.698 "rw_ios_per_sec": 0, 00:21:21.698 "rw_mbytes_per_sec": 0, 00:21:21.698 "r_mbytes_per_sec": 0, 00:21:21.698 "w_mbytes_per_sec": 0 00:21:21.698 }, 00:21:21.698 "claimed": true, 00:21:21.698 "claim_type": "exclusive_write", 00:21:21.698 "zoned": false, 00:21:21.698 "supported_io_types": { 00:21:21.698 "read": true, 00:21:21.698 "write": true, 00:21:21.698 "unmap": true, 00:21:21.698 "write_zeroes": true, 00:21:21.698 "flush": true, 00:21:21.698 "reset": true, 00:21:21.698 "compare": false, 00:21:21.698 "compare_and_write": false, 00:21:21.698 "abort": true, 00:21:21.698 "nvme_admin": false, 00:21:21.698 "nvme_io": false 00:21:21.698 }, 00:21:21.698 "memory_domains": [ 
00:21:21.698 { 00:21:21.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.698 "dma_device_type": 2 00:21:21.698 } 00:21:21.698 ], 00:21:21.698 "driver_specific": {} 00:21:21.698 } 00:21:21.698 ] 00:21:21.698 12:05:26 -- common/autotest_common.sh@905 -- # return 0 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.698 12:05:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.699 12:05:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.957 12:05:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.957 "name": "Existed_Raid", 00:21:21.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.957 "strip_size_kb": 64, 00:21:21.957 "state": "configuring", 00:21:21.957 "raid_level": "concat", 00:21:21.957 "superblock": false, 00:21:21.957 "num_base_bdevs": 4, 00:21:21.957 "num_base_bdevs_discovered": 2, 00:21:21.957 "num_base_bdevs_operational": 4, 00:21:21.957 "base_bdevs_list": [ 00:21:21.957 { 00:21:21.957 "name": "BaseBdev1", 00:21:21.957 "uuid": "260ab773-93c8-4f04-a1e6-7bef0ca405a0", 00:21:21.957 "is_configured": true, 00:21:21.957 "data_offset": 0, 00:21:21.957 "data_size": 65536 00:21:21.957 }, 00:21:21.957 { 00:21:21.957 "name": "BaseBdev2", 00:21:21.957 "uuid": "3c622f2e-3bb0-4917-9c1c-43d206178962", 00:21:21.957 "is_configured": true, 00:21:21.957 "data_offset": 0, 00:21:21.957 "data_size": 65536 00:21:21.957 }, 00:21:21.957 { 00:21:21.957 "name": "BaseBdev3", 00:21:21.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.957 "is_configured": false, 00:21:21.957 "data_offset": 0, 00:21:21.957 "data_size": 0 00:21:21.957 }, 00:21:21.957 { 00:21:21.957 "name": "BaseBdev4", 00:21:21.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.957 "is_configured": false, 00:21:21.957 "data_offset": 0, 00:21:21.957 "data_size": 0 00:21:21.957 } 00:21:21.957 ] 00:21:21.957 }' 00:21:21.957 12:05:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.958 12:05:27 -- common/autotest_common.sh@10 -- # set +x 00:21:22.893 12:05:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:22.893 [2024-11-29 12:05:28.345165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:22.893 BaseBdev3 00:21:22.893 12:05:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:22.893 12:05:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:22.893 12:05:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:22.893 
12:05:28 -- common/autotest_common.sh@899 -- # local i 00:21:22.893 12:05:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:22.893 12:05:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:22.893 12:05:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.152 12:05:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:23.720 [ 00:21:23.720 { 00:21:23.720 "name": "BaseBdev3", 00:21:23.720 "aliases": [ 00:21:23.720 "c3dd8e1e-3030-4ba7-82f7-8152f9e25a95" 00:21:23.720 ], 00:21:23.720 "product_name": "Malloc disk", 00:21:23.720 "block_size": 512, 00:21:23.720 "num_blocks": 65536, 00:21:23.720 "uuid": "c3dd8e1e-3030-4ba7-82f7-8152f9e25a95", 00:21:23.720 "assigned_rate_limits": { 00:21:23.720 "rw_ios_per_sec": 0, 00:21:23.720 "rw_mbytes_per_sec": 0, 00:21:23.720 "r_mbytes_per_sec": 0, 00:21:23.720 "w_mbytes_per_sec": 0 00:21:23.720 }, 00:21:23.720 "claimed": true, 00:21:23.720 "claim_type": "exclusive_write", 00:21:23.720 "zoned": false, 00:21:23.720 "supported_io_types": { 00:21:23.720 "read": true, 00:21:23.720 "write": true, 00:21:23.720 "unmap": true, 00:21:23.720 "write_zeroes": true, 00:21:23.720 "flush": true, 00:21:23.720 "reset": true, 00:21:23.720 "compare": false, 00:21:23.720 "compare_and_write": false, 00:21:23.720 "abort": true, 00:21:23.720 "nvme_admin": false, 00:21:23.720 "nvme_io": false 00:21:23.720 }, 00:21:23.720 "memory_domains": [ 00:21:23.720 { 00:21:23.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.720 "dma_device_type": 2 00:21:23.720 } 00:21:23.720 ], 00:21:23.720 "driver_specific": {} 00:21:23.720 } 00:21:23.720 ] 00:21:23.720 12:05:28 -- common/autotest_common.sh@905 -- # return 0 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.720 12:05:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.990 12:05:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.990 "name": "Existed_Raid", 00:21:23.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.990 "strip_size_kb": 64, 00:21:23.990 "state": "configuring", 00:21:23.990 "raid_level": "concat", 00:21:23.990 "superblock": false, 00:21:23.990 "num_base_bdevs": 4, 00:21:23.990 "num_base_bdevs_discovered": 3, 00:21:23.990 "num_base_bdevs_operational": 4, 00:21:23.990 "base_bdevs_list": [ 00:21:23.990 { 00:21:23.990 "name": 
"BaseBdev1", 00:21:23.990 "uuid": "260ab773-93c8-4f04-a1e6-7bef0ca405a0", 00:21:23.990 "is_configured": true, 00:21:23.990 "data_offset": 0, 00:21:23.990 "data_size": 65536 00:21:23.990 }, 00:21:23.990 { 00:21:23.990 "name": "BaseBdev2", 00:21:23.990 "uuid": "3c622f2e-3bb0-4917-9c1c-43d206178962", 00:21:23.990 "is_configured": true, 00:21:23.990 "data_offset": 0, 00:21:23.990 "data_size": 65536 00:21:23.990 }, 00:21:23.990 { 00:21:23.990 "name": "BaseBdev3", 00:21:23.990 "uuid": "c3dd8e1e-3030-4ba7-82f7-8152f9e25a95", 00:21:23.990 "is_configured": true, 00:21:23.990 "data_offset": 0, 00:21:23.990 "data_size": 65536 00:21:23.990 }, 00:21:23.990 { 00:21:23.990 "name": "BaseBdev4", 00:21:23.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.990 "is_configured": false, 00:21:23.990 "data_offset": 0, 00:21:23.990 "data_size": 0 00:21:23.990 } 00:21:23.990 ] 00:21:23.990 }' 00:21:23.990 12:05:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.990 12:05:29 -- common/autotest_common.sh@10 -- # set +x 00:21:24.566 12:05:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:24.824 [2024-11-29 12:05:30.251119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:24.824 [2024-11-29 12:05:30.251184] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:21:24.824 [2024-11-29 12:05:30.251195] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:24.824 [2024-11-29 12:05:30.251352] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:21:24.824 [2024-11-29 12:05:30.251787] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:21:24.824 [2024-11-29 12:05:30.251804] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:21:24.824 [2024-11-29 12:05:30.252111] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.824 BaseBdev4 00:21:24.824 12:05:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:24.824 12:05:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:24.824 12:05:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:24.824 12:05:30 -- common/autotest_common.sh@899 -- # local i 00:21:24.824 12:05:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:24.824 12:05:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:24.824 12:05:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.082 12:05:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:25.648 [ 00:21:25.648 { 00:21:25.648 "name": "BaseBdev4", 00:21:25.648 "aliases": [ 00:21:25.648 "5f3a97e6-6650-4815-98f6-403c747d6070" 00:21:25.648 ], 00:21:25.648 "product_name": "Malloc disk", 00:21:25.648 "block_size": 512, 00:21:25.648 "num_blocks": 65536, 00:21:25.648 "uuid": "5f3a97e6-6650-4815-98f6-403c747d6070", 00:21:25.648 "assigned_rate_limits": { 00:21:25.648 "rw_ios_per_sec": 0, 00:21:25.648 "rw_mbytes_per_sec": 0, 00:21:25.648 "r_mbytes_per_sec": 0, 00:21:25.648 "w_mbytes_per_sec": 0 00:21:25.648 }, 00:21:25.648 "claimed": true, 00:21:25.648 "claim_type": "exclusive_write", 00:21:25.648 "zoned": false, 00:21:25.648 
"supported_io_types": { 00:21:25.648 "read": true, 00:21:25.648 "write": true, 00:21:25.648 "unmap": true, 00:21:25.648 "write_zeroes": true, 00:21:25.648 "flush": true, 00:21:25.648 "reset": true, 00:21:25.648 "compare": false, 00:21:25.648 "compare_and_write": false, 00:21:25.648 "abort": true, 00:21:25.648 "nvme_admin": false, 00:21:25.648 "nvme_io": false 00:21:25.648 }, 00:21:25.648 "memory_domains": [ 00:21:25.648 { 00:21:25.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.648 "dma_device_type": 2 00:21:25.648 } 00:21:25.648 ], 00:21:25.648 "driver_specific": {} 00:21:25.648 } 00:21:25.648 ] 00:21:25.649 12:05:30 -- common/autotest_common.sh@905 -- # return 0 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.649 12:05:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.907 12:05:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.907 "name": "Existed_Raid", 00:21:25.907 "uuid": "dce626e2-d723-4f67-a92d-1a5921810511", 00:21:25.907 "strip_size_kb": 64, 00:21:25.907 "state": "online", 00:21:25.907 "raid_level": "concat", 00:21:25.907 "superblock": false, 00:21:25.907 "num_base_bdevs": 4, 00:21:25.907 "num_base_bdevs_discovered": 4, 00:21:25.908 "num_base_bdevs_operational": 4, 00:21:25.908 "base_bdevs_list": [ 00:21:25.908 { 00:21:25.908 "name": "BaseBdev1", 00:21:25.908 "uuid": "260ab773-93c8-4f04-a1e6-7bef0ca405a0", 00:21:25.908 "is_configured": true, 00:21:25.908 "data_offset": 0, 00:21:25.908 "data_size": 65536 00:21:25.908 }, 00:21:25.908 { 00:21:25.908 "name": "BaseBdev2", 00:21:25.908 "uuid": "3c622f2e-3bb0-4917-9c1c-43d206178962", 00:21:25.908 "is_configured": true, 00:21:25.908 "data_offset": 0, 00:21:25.908 "data_size": 65536 00:21:25.908 }, 00:21:25.908 { 00:21:25.908 "name": "BaseBdev3", 00:21:25.908 "uuid": "c3dd8e1e-3030-4ba7-82f7-8152f9e25a95", 00:21:25.908 "is_configured": true, 00:21:25.908 "data_offset": 0, 00:21:25.908 "data_size": 65536 00:21:25.908 }, 00:21:25.908 { 00:21:25.908 "name": "BaseBdev4", 00:21:25.908 "uuid": "5f3a97e6-6650-4815-98f6-403c747d6070", 00:21:25.908 "is_configured": true, 00:21:25.908 "data_offset": 0, 00:21:25.908 "data_size": 65536 00:21:25.908 } 00:21:25.908 ] 00:21:25.908 }' 00:21:25.908 12:05:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.908 12:05:31 -- common/autotest_common.sh@10 -- # set +x 00:21:26.493 12:05:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:21:26.751 [2024-11-29 12:05:32.187777] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.751 [2024-11-29 12:05:32.187849] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:26.751 [2024-11-29 12:05:32.187953] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.751 12:05:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.010 12:05:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.010 "name": "Existed_Raid", 00:21:27.010 "uuid": "dce626e2-d723-4f67-a92d-1a5921810511", 00:21:27.010 "strip_size_kb": 64, 00:21:27.010 "state": "offline", 00:21:27.010 "raid_level": "concat", 00:21:27.010 "superblock": false, 00:21:27.010 "num_base_bdevs": 4, 00:21:27.010 "num_base_bdevs_discovered": 3, 00:21:27.010 "num_base_bdevs_operational": 3, 00:21:27.010 "base_bdevs_list": [ 00:21:27.010 { 00:21:27.010 "name": null, 00:21:27.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.010 "is_configured": false, 00:21:27.010 "data_offset": 0, 00:21:27.010 "data_size": 65536 00:21:27.010 }, 00:21:27.010 { 00:21:27.010 "name": "BaseBdev2", 00:21:27.010 "uuid": "3c622f2e-3bb0-4917-9c1c-43d206178962", 00:21:27.010 "is_configured": true, 00:21:27.010 "data_offset": 0, 00:21:27.010 "data_size": 65536 00:21:27.010 }, 00:21:27.010 { 00:21:27.010 "name": "BaseBdev3", 00:21:27.010 "uuid": "c3dd8e1e-3030-4ba7-82f7-8152f9e25a95", 00:21:27.010 "is_configured": true, 00:21:27.010 "data_offset": 0, 00:21:27.010 "data_size": 65536 00:21:27.010 }, 00:21:27.010 { 00:21:27.010 "name": "BaseBdev4", 00:21:27.010 "uuid": "5f3a97e6-6650-4815-98f6-403c747d6070", 00:21:27.010 "is_configured": true, 00:21:27.010 "data_offset": 0, 00:21:27.010 "data_size": 65536 00:21:27.010 } 00:21:27.010 ] 00:21:27.010 }' 00:21:27.010 12:05:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.010 12:05:32 -- common/autotest_common.sh@10 -- # set +x 00:21:27.945 12:05:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:27.945 12:05:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:27.945 12:05:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:27.945 12:05:33 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.203 12:05:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:28.203 12:05:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:28.203 12:05:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:28.461 [2024-11-29 12:05:33.731875] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:28.461 12:05:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:28.461 12:05:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:28.461 12:05:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.461 12:05:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:28.722 12:05:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:28.722 12:05:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:28.722 12:05:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:28.979 [2024-11-29 12:05:34.259806] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:28.979 12:05:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:28.979 12:05:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:28.979 12:05:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.979 12:05:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:29.238 12:05:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:29.238 12:05:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:29.238 12:05:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:29.496 [2024-11-29 12:05:34.820370] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:29.496 [2024-11-29 12:05:34.820489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:21:29.496 12:05:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:29.496 12:05:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:29.496 12:05:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.496 12:05:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:29.753 12:05:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:29.753 12:05:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:29.753 12:05:35 -- bdev/bdev_raid.sh@287 -- # killprocess 130994 00:21:29.753 12:05:35 -- common/autotest_common.sh@936 -- # '[' -z 130994 ']' 00:21:29.753 12:05:35 -- common/autotest_common.sh@940 -- # kill -0 130994 00:21:29.753 12:05:35 -- common/autotest_common.sh@941 -- # uname 00:21:29.753 12:05:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.753 12:05:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130994 00:21:29.753 killing process with pid 130994 00:21:29.753 12:05:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:29.753 12:05:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:29.753 12:05:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130994' 00:21:29.753 12:05:35 -- 
common/autotest_common.sh@955 -- # kill 130994 00:21:29.753 12:05:35 -- common/autotest_common.sh@960 -- # wait 130994 00:21:29.753 [2024-11-29 12:05:35.161455] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.753 [2024-11-29 12:05:35.161604] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.012 ************************************ 00:21:30.012 END TEST raid_state_function_test 00:21:30.012 ************************************ 00:21:30.012 12:05:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:30.012 00:21:30.012 real 0m15.439s 00:21:30.012 user 0m28.481s 00:21:30.012 sys 0m1.936s 00:21:30.012 12:05:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:30.012 12:05:35 -- common/autotest_common.sh@10 -- # set +x 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:30.270 12:05:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:30.270 12:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:30.270 12:05:35 -- common/autotest_common.sh@10 -- # set +x 00:21:30.270 ************************************ 00:21:30.270 START TEST raid_state_function_test_sb 00:21:30.270 ************************************ 00:21:30.270 12:05:35 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@226 
-- # raid_pid=131446 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131446' 00:21:30.270 Process raid pid: 131446 00:21:30.270 12:05:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131446 /var/tmp/spdk-raid.sock 00:21:30.270 12:05:35 -- common/autotest_common.sh@829 -- # '[' -z 131446 ']' 00:21:30.270 12:05:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:30.270 12:05:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:30.270 12:05:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:30.270 12:05:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.271 12:05:35 -- common/autotest_common.sh@10 -- # set +x 00:21:30.271 [2024-11-29 12:05:35.619388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:30.271 [2024-11-29 12:05:35.619678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.271 [2024-11-29 12:05:35.775054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.529 [2024-11-29 12:05:35.878622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.529 [2024-11-29 12:05:35.936872] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.097 12:05:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.097 12:05:36 -- common/autotest_common.sh@862 -- # return 0 00:21:31.097 12:05:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:31.356 [2024-11-29 12:05:36.808875] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:31.356 [2024-11-29 12:05:36.809016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:31.356 [2024-11-29 12:05:36.809032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.356 [2024-11-29 12:05:36.809053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.356 [2024-11-29 12:05:36.809061] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:31.356 [2024-11-29 12:05:36.809114] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:31.356 [2024-11-29 12:05:36.809124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:31.356 [2024-11-29 12:05:36.809152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 
00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.356 12:05:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.615 12:05:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.615 "name": "Existed_Raid", 00:21:31.615 "uuid": "69e9fe2d-c7db-49cd-84d5-c6f35e0ac1ee", 00:21:31.615 "strip_size_kb": 64, 00:21:31.615 "state": "configuring", 00:21:31.615 "raid_level": "concat", 00:21:31.615 "superblock": true, 00:21:31.615 "num_base_bdevs": 4, 00:21:31.615 "num_base_bdevs_discovered": 0, 00:21:31.615 "num_base_bdevs_operational": 4, 00:21:31.615 "base_bdevs_list": [ 00:21:31.615 { 00:21:31.615 "name": "BaseBdev1", 00:21:31.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.615 "is_configured": false, 00:21:31.615 "data_offset": 0, 00:21:31.615 "data_size": 0 00:21:31.615 }, 00:21:31.615 { 00:21:31.615 "name": "BaseBdev2", 00:21:31.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.615 "is_configured": false, 00:21:31.615 "data_offset": 0, 00:21:31.615 "data_size": 0 00:21:31.615 }, 00:21:31.615 { 00:21:31.615 "name": "BaseBdev3", 00:21:31.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.615 "is_configured": false, 00:21:31.615 "data_offset": 0, 00:21:31.615 "data_size": 0 00:21:31.615 }, 00:21:31.615 { 00:21:31.615 "name": "BaseBdev4", 00:21:31.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.615 "is_configured": false, 00:21:31.615 "data_offset": 0, 00:21:31.615 "data_size": 0 00:21:31.615 } 00:21:31.615 ] 00:21:31.615 }' 00:21:31.615 12:05:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.615 12:05:37 -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 12:05:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:32.809 [2024-11-29 12:05:38.076974] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:32.809 [2024-11-29 12:05:38.077048] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:21:32.809 12:05:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:32.809 [2024-11-29 12:05:38.305122] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.809 [2024-11-29 12:05:38.305232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.809 [2024-11-29 12:05:38.305246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:32.809 [2024-11-29 12:05:38.305275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:32.809 [2024-11-29 12:05:38.305284] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:32.809 [2024-11-29 
12:05:38.305303] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:32.809 [2024-11-29 12:05:38.305310] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:32.809 [2024-11-29 12:05:38.305336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:33.068 12:05:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:33.327 [2024-11-29 12:05:38.596857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.327 BaseBdev1 00:21:33.327 12:05:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:33.327 12:05:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:33.327 12:05:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:33.327 12:05:38 -- common/autotest_common.sh@899 -- # local i 00:21:33.327 12:05:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:33.327 12:05:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:33.327 12:05:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:33.586 12:05:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.586 [ 00:21:33.586 { 00:21:33.586 "name": "BaseBdev1", 00:21:33.586 "aliases": [ 00:21:33.586 "3566055a-f7cd-4d76-b2b6-94517d114b1b" 00:21:33.586 ], 00:21:33.586 "product_name": "Malloc disk", 00:21:33.586 "block_size": 512, 00:21:33.586 "num_blocks": 65536, 00:21:33.586 "uuid": "3566055a-f7cd-4d76-b2b6-94517d114b1b", 00:21:33.586 "assigned_rate_limits": { 00:21:33.586 "rw_ios_per_sec": 0, 00:21:33.586 "rw_mbytes_per_sec": 0, 00:21:33.586 "r_mbytes_per_sec": 0, 00:21:33.586 "w_mbytes_per_sec": 0 00:21:33.586 }, 00:21:33.586 "claimed": true, 00:21:33.586 "claim_type": "exclusive_write", 00:21:33.586 "zoned": false, 00:21:33.586 "supported_io_types": { 00:21:33.586 "read": true, 00:21:33.586 "write": true, 00:21:33.586 "unmap": true, 00:21:33.586 "write_zeroes": true, 00:21:33.586 "flush": true, 00:21:33.586 "reset": true, 00:21:33.586 "compare": false, 00:21:33.586 "compare_and_write": false, 00:21:33.586 "abort": true, 00:21:33.586 "nvme_admin": false, 00:21:33.586 "nvme_io": false 00:21:33.586 }, 00:21:33.586 "memory_domains": [ 00:21:33.586 { 00:21:33.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.586 "dma_device_type": 2 00:21:33.586 } 00:21:33.586 ], 00:21:33.586 "driver_specific": {} 00:21:33.586 } 00:21:33.586 ] 00:21:33.844 12:05:39 -- common/autotest_common.sh@905 -- # return 0 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.844 12:05:39 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:33.844 "name": "Existed_Raid", 00:21:33.844 "uuid": "89963be8-82d2-4c14-9917-f137d769b86c", 00:21:33.844 "strip_size_kb": 64, 00:21:33.844 "state": "configuring", 00:21:33.844 "raid_level": "concat", 00:21:33.844 "superblock": true, 00:21:33.844 "num_base_bdevs": 4, 00:21:33.844 "num_base_bdevs_discovered": 1, 00:21:33.844 "num_base_bdevs_operational": 4, 00:21:33.844 "base_bdevs_list": [ 00:21:33.844 { 00:21:33.844 "name": "BaseBdev1", 00:21:33.844 "uuid": "3566055a-f7cd-4d76-b2b6-94517d114b1b", 00:21:33.844 "is_configured": true, 00:21:33.844 "data_offset": 2048, 00:21:33.844 "data_size": 63488 00:21:33.844 }, 00:21:33.844 { 00:21:33.844 "name": "BaseBdev2", 00:21:33.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.844 "is_configured": false, 00:21:33.844 "data_offset": 0, 00:21:33.844 "data_size": 0 00:21:33.844 }, 00:21:33.844 { 00:21:33.844 "name": "BaseBdev3", 00:21:33.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.844 "is_configured": false, 00:21:33.844 "data_offset": 0, 00:21:33.844 "data_size": 0 00:21:33.844 }, 00:21:33.844 { 00:21:33.844 "name": "BaseBdev4", 00:21:33.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.844 "is_configured": false, 00:21:33.844 "data_offset": 0, 00:21:33.844 "data_size": 0 00:21:33.844 } 00:21:33.844 ] 00:21:33.844 }' 00:21:33.844 12:05:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:33.844 12:05:39 -- common/autotest_common.sh@10 -- # set +x 00:21:34.780 12:05:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:34.780 [2024-11-29 12:05:40.193302] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:34.780 [2024-11-29 12:05:40.193405] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:21:34.780 12:05:40 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:34.780 12:05:40 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:35.039 12:05:40 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:35.298 BaseBdev1 00:21:35.298 12:05:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:35.298 12:05:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:35.298 12:05:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.298 12:05:40 -- common/autotest_common.sh@899 -- # local i 00:21:35.298 12:05:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.298 12:05:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.298 12:05:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.556 12:05:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:35.814 [ 00:21:35.814 { 00:21:35.814 "name": "BaseBdev1", 00:21:35.814 "aliases": [ 00:21:35.814 
"85887a90-fd43-4e62-a7d9-a3da1388f4c5" 00:21:35.814 ], 00:21:35.814 "product_name": "Malloc disk", 00:21:35.814 "block_size": 512, 00:21:35.814 "num_blocks": 65536, 00:21:35.814 "uuid": "85887a90-fd43-4e62-a7d9-a3da1388f4c5", 00:21:35.814 "assigned_rate_limits": { 00:21:35.814 "rw_ios_per_sec": 0, 00:21:35.814 "rw_mbytes_per_sec": 0, 00:21:35.814 "r_mbytes_per_sec": 0, 00:21:35.814 "w_mbytes_per_sec": 0 00:21:35.814 }, 00:21:35.814 "claimed": false, 00:21:35.814 "zoned": false, 00:21:35.814 "supported_io_types": { 00:21:35.814 "read": true, 00:21:35.814 "write": true, 00:21:35.814 "unmap": true, 00:21:35.814 "write_zeroes": true, 00:21:35.814 "flush": true, 00:21:35.814 "reset": true, 00:21:35.814 "compare": false, 00:21:35.814 "compare_and_write": false, 00:21:35.814 "abort": true, 00:21:35.814 "nvme_admin": false, 00:21:35.814 "nvme_io": false 00:21:35.814 }, 00:21:35.814 "memory_domains": [ 00:21:35.814 { 00:21:35.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.814 "dma_device_type": 2 00:21:35.814 } 00:21:35.814 ], 00:21:35.814 "driver_specific": {} 00:21:35.814 } 00:21:35.814 ] 00:21:35.814 12:05:41 -- common/autotest_common.sh@905 -- # return 0 00:21:35.814 12:05:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:36.073 [2024-11-29 12:05:41.510745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.073 [2024-11-29 12:05:41.513023] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.073 [2024-11-29 12:05:41.513110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.073 [2024-11-29 12:05:41.513124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:36.073 [2024-11-29 12:05:41.513151] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.073 [2024-11-29 12:05:41.513160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:36.073 [2024-11-29 12:05:41.513179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.073 12:05:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.332 12:05:41 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:21:36.332 "name": "Existed_Raid", 00:21:36.332 "uuid": "2bb7c73c-8e03-477e-a286-d15f8e9569fe", 00:21:36.332 "strip_size_kb": 64, 00:21:36.332 "state": "configuring", 00:21:36.332 "raid_level": "concat", 00:21:36.332 "superblock": true, 00:21:36.332 "num_base_bdevs": 4, 00:21:36.332 "num_base_bdevs_discovered": 1, 00:21:36.332 "num_base_bdevs_operational": 4, 00:21:36.332 "base_bdevs_list": [ 00:21:36.332 { 00:21:36.332 "name": "BaseBdev1", 00:21:36.332 "uuid": "85887a90-fd43-4e62-a7d9-a3da1388f4c5", 00:21:36.332 "is_configured": true, 00:21:36.332 "data_offset": 2048, 00:21:36.332 "data_size": 63488 00:21:36.332 }, 00:21:36.332 { 00:21:36.332 "name": "BaseBdev2", 00:21:36.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.332 "is_configured": false, 00:21:36.332 "data_offset": 0, 00:21:36.332 "data_size": 0 00:21:36.332 }, 00:21:36.332 { 00:21:36.332 "name": "BaseBdev3", 00:21:36.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.332 "is_configured": false, 00:21:36.332 "data_offset": 0, 00:21:36.332 "data_size": 0 00:21:36.332 }, 00:21:36.332 { 00:21:36.332 "name": "BaseBdev4", 00:21:36.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.332 "is_configured": false, 00:21:36.332 "data_offset": 0, 00:21:36.332 "data_size": 0 00:21:36.332 } 00:21:36.332 ] 00:21:36.332 }' 00:21:36.332 12:05:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:36.332 12:05:41 -- common/autotest_common.sh@10 -- # set +x 00:21:37.266 12:05:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:37.266 [2024-11-29 12:05:42.750653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.266 BaseBdev2 00:21:37.266 12:05:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:37.266 12:05:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:37.266 12:05:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:37.266 12:05:42 -- common/autotest_common.sh@899 -- # local i 00:21:37.266 12:05:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:37.266 12:05:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:37.266 12:05:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:37.524 12:05:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:37.782 [ 00:21:37.782 { 00:21:37.782 "name": "BaseBdev2", 00:21:37.782 "aliases": [ 00:21:37.782 "d4ee180f-be8d-4a73-8f72-c9d1086bcf4e" 00:21:37.782 ], 00:21:37.782 "product_name": "Malloc disk", 00:21:37.782 "block_size": 512, 00:21:37.782 "num_blocks": 65536, 00:21:37.782 "uuid": "d4ee180f-be8d-4a73-8f72-c9d1086bcf4e", 00:21:37.782 "assigned_rate_limits": { 00:21:37.782 "rw_ios_per_sec": 0, 00:21:37.782 "rw_mbytes_per_sec": 0, 00:21:37.782 "r_mbytes_per_sec": 0, 00:21:37.782 "w_mbytes_per_sec": 0 00:21:37.782 }, 00:21:37.782 "claimed": true, 00:21:37.782 "claim_type": "exclusive_write", 00:21:37.782 "zoned": false, 00:21:37.782 "supported_io_types": { 00:21:37.782 "read": true, 00:21:37.782 "write": true, 00:21:37.782 "unmap": true, 00:21:37.782 "write_zeroes": true, 00:21:37.782 "flush": true, 00:21:37.782 "reset": true, 00:21:37.782 "compare": false, 00:21:37.782 "compare_and_write": false, 00:21:37.782 "abort": true, 00:21:37.782 "nvme_admin": false, 00:21:37.782 
"nvme_io": false 00:21:37.782 }, 00:21:37.782 "memory_domains": [ 00:21:37.782 { 00:21:37.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.782 "dma_device_type": 2 00:21:37.782 } 00:21:37.782 ], 00:21:37.782 "driver_specific": {} 00:21:37.782 } 00:21:37.782 ] 00:21:37.782 12:05:43 -- common/autotest_common.sh@905 -- # return 0 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.782 12:05:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.040 12:05:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.040 "name": "Existed_Raid", 00:21:38.040 "uuid": "2bb7c73c-8e03-477e-a286-d15f8e9569fe", 00:21:38.040 "strip_size_kb": 64, 00:21:38.040 "state": "configuring", 00:21:38.040 "raid_level": "concat", 00:21:38.040 "superblock": true, 00:21:38.040 "num_base_bdevs": 4, 00:21:38.040 "num_base_bdevs_discovered": 2, 00:21:38.040 "num_base_bdevs_operational": 4, 00:21:38.040 "base_bdevs_list": [ 00:21:38.040 { 00:21:38.040 "name": "BaseBdev1", 00:21:38.040 "uuid": "85887a90-fd43-4e62-a7d9-a3da1388f4c5", 00:21:38.040 "is_configured": true, 00:21:38.040 "data_offset": 2048, 00:21:38.040 "data_size": 63488 00:21:38.040 }, 00:21:38.040 { 00:21:38.040 "name": "BaseBdev2", 00:21:38.040 "uuid": "d4ee180f-be8d-4a73-8f72-c9d1086bcf4e", 00:21:38.040 "is_configured": true, 00:21:38.040 "data_offset": 2048, 00:21:38.040 "data_size": 63488 00:21:38.040 }, 00:21:38.040 { 00:21:38.040 "name": "BaseBdev3", 00:21:38.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.040 "is_configured": false, 00:21:38.040 "data_offset": 0, 00:21:38.040 "data_size": 0 00:21:38.040 }, 00:21:38.040 { 00:21:38.040 "name": "BaseBdev4", 00:21:38.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.040 "is_configured": false, 00:21:38.040 "data_offset": 0, 00:21:38.040 "data_size": 0 00:21:38.040 } 00:21:38.040 ] 00:21:38.040 }' 00:21:38.040 12:05:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.040 12:05:43 -- common/autotest_common.sh@10 -- # set +x 00:21:38.972 12:05:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:38.972 [2024-11-29 12:05:44.448178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:38.972 BaseBdev3 00:21:38.972 12:05:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:38.972 12:05:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:38.972 12:05:44 -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:38.972 12:05:44 -- common/autotest_common.sh@899 -- # local i 00:21:38.972 12:05:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:38.972 12:05:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:38.972 12:05:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.229 12:05:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.794 [ 00:21:39.794 { 00:21:39.794 "name": "BaseBdev3", 00:21:39.794 "aliases": [ 00:21:39.794 "55c3e157-2989-44b1-bdad-fb9ff6ead57e" 00:21:39.794 ], 00:21:39.794 "product_name": "Malloc disk", 00:21:39.794 "block_size": 512, 00:21:39.794 "num_blocks": 65536, 00:21:39.794 "uuid": "55c3e157-2989-44b1-bdad-fb9ff6ead57e", 00:21:39.794 "assigned_rate_limits": { 00:21:39.794 "rw_ios_per_sec": 0, 00:21:39.794 "rw_mbytes_per_sec": 0, 00:21:39.794 "r_mbytes_per_sec": 0, 00:21:39.794 "w_mbytes_per_sec": 0 00:21:39.794 }, 00:21:39.794 "claimed": true, 00:21:39.794 "claim_type": "exclusive_write", 00:21:39.794 "zoned": false, 00:21:39.794 "supported_io_types": { 00:21:39.794 "read": true, 00:21:39.794 "write": true, 00:21:39.794 "unmap": true, 00:21:39.794 "write_zeroes": true, 00:21:39.794 "flush": true, 00:21:39.794 "reset": true, 00:21:39.794 "compare": false, 00:21:39.794 "compare_and_write": false, 00:21:39.794 "abort": true, 00:21:39.794 "nvme_admin": false, 00:21:39.794 "nvme_io": false 00:21:39.794 }, 00:21:39.794 "memory_domains": [ 00:21:39.794 { 00:21:39.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.794 "dma_device_type": 2 00:21:39.794 } 00:21:39.794 ], 00:21:39.794 "driver_specific": {} 00:21:39.794 } 00:21:39.794 ] 00:21:39.794 12:05:45 -- common/autotest_common.sh@905 -- # return 0 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.794 12:05:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.053 12:05:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.053 "name": "Existed_Raid", 00:21:40.053 "uuid": "2bb7c73c-8e03-477e-a286-d15f8e9569fe", 00:21:40.053 "strip_size_kb": 64, 00:21:40.053 "state": "configuring", 00:21:40.053 "raid_level": "concat", 00:21:40.053 "superblock": true, 00:21:40.053 "num_base_bdevs": 4, 00:21:40.053 "num_base_bdevs_discovered": 3, 00:21:40.053 "num_base_bdevs_operational": 4, 
00:21:40.053 "base_bdevs_list": [ 00:21:40.053 { 00:21:40.053 "name": "BaseBdev1", 00:21:40.053 "uuid": "85887a90-fd43-4e62-a7d9-a3da1388f4c5", 00:21:40.053 "is_configured": true, 00:21:40.053 "data_offset": 2048, 00:21:40.053 "data_size": 63488 00:21:40.053 }, 00:21:40.053 { 00:21:40.053 "name": "BaseBdev2", 00:21:40.053 "uuid": "d4ee180f-be8d-4a73-8f72-c9d1086bcf4e", 00:21:40.053 "is_configured": true, 00:21:40.053 "data_offset": 2048, 00:21:40.053 "data_size": 63488 00:21:40.053 }, 00:21:40.053 { 00:21:40.053 "name": "BaseBdev3", 00:21:40.053 "uuid": "55c3e157-2989-44b1-bdad-fb9ff6ead57e", 00:21:40.053 "is_configured": true, 00:21:40.053 "data_offset": 2048, 00:21:40.053 "data_size": 63488 00:21:40.053 }, 00:21:40.053 { 00:21:40.053 "name": "BaseBdev4", 00:21:40.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.053 "is_configured": false, 00:21:40.053 "data_offset": 0, 00:21:40.053 "data_size": 0 00:21:40.053 } 00:21:40.053 ] 00:21:40.053 }' 00:21:40.053 12:05:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.053 12:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:40.621 12:05:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:40.953 [2024-11-29 12:05:46.234047] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:40.953 [2024-11-29 12:05:46.234339] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:21:40.953 [2024-11-29 12:05:46.234373] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:40.953 [2024-11-29 12:05:46.234500] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:21:40.953 [2024-11-29 12:05:46.234958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:21:40.953 [2024-11-29 12:05:46.234983] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:21:40.953 [2024-11-29 12:05:46.235161] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.953 BaseBdev4 00:21:40.953 12:05:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:40.953 12:05:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:40.953 12:05:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:40.953 12:05:46 -- common/autotest_common.sh@899 -- # local i 00:21:40.953 12:05:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:40.953 12:05:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:40.953 12:05:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:41.228 12:05:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:41.486 [ 00:21:41.486 { 00:21:41.486 "name": "BaseBdev4", 00:21:41.486 "aliases": [ 00:21:41.486 "8599ce01-8532-4826-8598-c1f3b2e13bea" 00:21:41.486 ], 00:21:41.486 "product_name": "Malloc disk", 00:21:41.486 "block_size": 512, 00:21:41.486 "num_blocks": 65536, 00:21:41.486 "uuid": "8599ce01-8532-4826-8598-c1f3b2e13bea", 00:21:41.486 "assigned_rate_limits": { 00:21:41.486 "rw_ios_per_sec": 0, 00:21:41.486 "rw_mbytes_per_sec": 0, 00:21:41.486 "r_mbytes_per_sec": 0, 00:21:41.486 "w_mbytes_per_sec": 0 00:21:41.486 }, 00:21:41.486 "claimed": true, 00:21:41.486 "claim_type": 
"exclusive_write", 00:21:41.486 "zoned": false, 00:21:41.486 "supported_io_types": { 00:21:41.486 "read": true, 00:21:41.486 "write": true, 00:21:41.486 "unmap": true, 00:21:41.486 "write_zeroes": true, 00:21:41.486 "flush": true, 00:21:41.486 "reset": true, 00:21:41.486 "compare": false, 00:21:41.486 "compare_and_write": false, 00:21:41.486 "abort": true, 00:21:41.486 "nvme_admin": false, 00:21:41.486 "nvme_io": false 00:21:41.486 }, 00:21:41.486 "memory_domains": [ 00:21:41.486 { 00:21:41.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.486 "dma_device_type": 2 00:21:41.486 } 00:21:41.486 ], 00:21:41.486 "driver_specific": {} 00:21:41.486 } 00:21:41.486 ] 00:21:41.486 12:05:46 -- common/autotest_common.sh@905 -- # return 0 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.486 12:05:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.744 12:05:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.744 "name": "Existed_Raid", 00:21:41.744 "uuid": "2bb7c73c-8e03-477e-a286-d15f8e9569fe", 00:21:41.744 "strip_size_kb": 64, 00:21:41.744 "state": "online", 00:21:41.744 "raid_level": "concat", 00:21:41.744 "superblock": true, 00:21:41.744 "num_base_bdevs": 4, 00:21:41.744 "num_base_bdevs_discovered": 4, 00:21:41.744 "num_base_bdevs_operational": 4, 00:21:41.744 "base_bdevs_list": [ 00:21:41.744 { 00:21:41.744 "name": "BaseBdev1", 00:21:41.744 "uuid": "85887a90-fd43-4e62-a7d9-a3da1388f4c5", 00:21:41.744 "is_configured": true, 00:21:41.744 "data_offset": 2048, 00:21:41.744 "data_size": 63488 00:21:41.744 }, 00:21:41.744 { 00:21:41.744 "name": "BaseBdev2", 00:21:41.744 "uuid": "d4ee180f-be8d-4a73-8f72-c9d1086bcf4e", 00:21:41.744 "is_configured": true, 00:21:41.744 "data_offset": 2048, 00:21:41.744 "data_size": 63488 00:21:41.744 }, 00:21:41.744 { 00:21:41.744 "name": "BaseBdev3", 00:21:41.744 "uuid": "55c3e157-2989-44b1-bdad-fb9ff6ead57e", 00:21:41.744 "is_configured": true, 00:21:41.744 "data_offset": 2048, 00:21:41.744 "data_size": 63488 00:21:41.744 }, 00:21:41.744 { 00:21:41.744 "name": "BaseBdev4", 00:21:41.744 "uuid": "8599ce01-8532-4826-8598-c1f3b2e13bea", 00:21:41.744 "is_configured": true, 00:21:41.744 "data_offset": 2048, 00:21:41.744 "data_size": 63488 00:21:41.744 } 00:21:41.744 ] 00:21:41.744 }' 00:21:41.744 12:05:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.744 12:05:47 -- common/autotest_common.sh@10 -- # set +x 00:21:42.311 12:05:47 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:42.569 [2024-11-29 12:05:48.063241] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.569 [2024-11-29 12:05:48.063316] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.569 [2024-11-29 12:05:48.063421] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.827 "name": "Existed_Raid", 00:21:42.827 "uuid": "2bb7c73c-8e03-477e-a286-d15f8e9569fe", 00:21:42.827 "strip_size_kb": 64, 00:21:42.827 "state": "offline", 00:21:42.827 "raid_level": "concat", 00:21:42.827 "superblock": true, 00:21:42.827 "num_base_bdevs": 4, 00:21:42.827 "num_base_bdevs_discovered": 3, 00:21:42.827 "num_base_bdevs_operational": 3, 00:21:42.827 "base_bdevs_list": [ 00:21:42.827 { 00:21:42.827 "name": null, 00:21:42.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.827 "is_configured": false, 00:21:42.827 "data_offset": 2048, 00:21:42.827 "data_size": 63488 00:21:42.827 }, 00:21:42.827 { 00:21:42.827 "name": "BaseBdev2", 00:21:42.827 "uuid": "d4ee180f-be8d-4a73-8f72-c9d1086bcf4e", 00:21:42.827 "is_configured": true, 00:21:42.827 "data_offset": 2048, 00:21:42.827 "data_size": 63488 00:21:42.827 }, 00:21:42.827 { 00:21:42.827 "name": "BaseBdev3", 00:21:42.827 "uuid": "55c3e157-2989-44b1-bdad-fb9ff6ead57e", 00:21:42.827 "is_configured": true, 00:21:42.827 "data_offset": 2048, 00:21:42.827 "data_size": 63488 00:21:42.827 }, 00:21:42.827 { 00:21:42.827 "name": "BaseBdev4", 00:21:42.827 "uuid": "8599ce01-8532-4826-8598-c1f3b2e13bea", 00:21:42.827 "is_configured": true, 00:21:42.827 "data_offset": 2048, 00:21:42.827 "data_size": 63488 00:21:42.827 } 00:21:42.827 ] 00:21:42.827 }' 00:21:42.827 12:05:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.827 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:43.762 12:05:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:43.762 12:05:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:43.762 12:05:49 -- 
bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:43.762 12:05:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.021 12:05:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:44.021 12:05:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:44.021 12:05:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:44.280 [2024-11-29 12:05:49.552504] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:44.280 12:05:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:44.280 12:05:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:44.280 12:05:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:44.280 12:05:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.538 12:05:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:44.538 12:05:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:44.538 12:05:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:44.538 [2024-11-29 12:05:50.027213] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:44.797 12:05:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:45.365 [2024-11-29 12:05:50.578110] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:45.365 [2024-11-29 12:05:50.578214] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:45.365 12:05:50 -- bdev/bdev_raid.sh@287 -- # killprocess 131446 00:21:45.365 12:05:50 -- common/autotest_common.sh@936 -- # '[' -z 131446 ']' 00:21:45.365 12:05:50 -- common/autotest_common.sh@940 -- # kill -0 131446 00:21:45.365 12:05:50 -- common/autotest_common.sh@941 -- # uname 00:21:45.365 12:05:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.365 12:05:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131446 00:21:45.365 12:05:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.365 killing process with pid 131446 00:21:45.365 12:05:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.365 12:05:50 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 131446' 00:21:45.365 12:05:50 -- common/autotest_common.sh@955 -- # kill 131446 00:21:45.365 12:05:50 -- common/autotest_common.sh@960 -- # wait 131446 00:21:45.365 [2024-11-29 12:05:50.867582] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:45.365 [2024-11-29 12:05:50.867702] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.623 12:05:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:45.623 00:21:45.623 real 0m15.582s 00:21:45.623 user 0m28.721s 00:21:45.623 sys 0m2.045s 00:21:45.623 12:05:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:45.881 12:05:51 -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 ************************************ 00:21:45.881 END TEST raid_state_function_test_sb 00:21:45.881 ************************************ 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:45.881 12:05:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:45.881 12:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:45.881 12:05:51 -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 ************************************ 00:21:45.881 START TEST raid_superblock_test 00:21:45.881 ************************************ 00:21:45.881 12:05:51 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=131904 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131904 /var/tmp/spdk-raid.sock 00:21:45.881 12:05:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:45.881 12:05:51 -- common/autotest_common.sh@829 -- # '[' -z 131904 ']' 00:21:45.881 12:05:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:45.881 12:05:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.881 12:05:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:45.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
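raid_superblock_test drives a fresh bdev_svc instance on the same socket path, but this time each array member is a passthru bdev (pt1..pt4) layered on a malloc bdev, and the array is created with -s so superblocks are written to the members. A minimal sketch of that construction, assuming the app is already listening on /var/tmp/spdk-raid.sock (the loop and the rpc shell variable are just a condensed form of the per-bdev calls traced below):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b malloc$i                     # 32 MB backing device, 512 B blocks
    $rpc bdev_passthru_create -b malloc$i -p pt$i \
         -u 00000000-0000-0000-0000-00000000000$i                  # passthru member with a fixed UUID
  done
  # -z 64: 64 KiB strip, -r concat: level under test, -s: write superblocks to the members
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s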
00:21:45.881 12:05:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.881 12:05:51 -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 [2024-11-29 12:05:51.247951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:45.882 [2024-11-29 12:05:51.248186] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131904 ] 00:21:45.882 [2024-11-29 12:05:51.388240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.140 [2024-11-29 12:05:51.484097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.140 [2024-11-29 12:05:51.538401] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.074 12:05:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.074 12:05:52 -- common/autotest_common.sh@862 -- # return 0 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:47.074 malloc1 00:21:47.074 12:05:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:47.332 [2024-11-29 12:05:52.760120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:47.332 [2024-11-29 12:05:52.760275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.332 [2024-11-29 12:05:52.760331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:47.332 [2024-11-29 12:05:52.760405] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.332 [2024-11-29 12:05:52.763253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.332 [2024-11-29 12:05:52.763335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:47.332 pt1 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:47.332 12:05:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:47.590 malloc2 00:21:47.590 12:05:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:47.849 [2024-11-29 12:05:53.239425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:47.849 [2024-11-29 12:05:53.239537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.849 [2024-11-29 12:05:53.239585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:47.849 [2024-11-29 12:05:53.239636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.849 [2024-11-29 12:05:53.242230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.849 [2024-11-29 12:05:53.242297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:47.849 pt2 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:47.849 12:05:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:48.107 malloc3 00:21:48.107 12:05:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:48.365 [2024-11-29 12:05:53.851816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:48.365 [2024-11-29 12:05:53.851936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.365 [2024-11-29 12:05:53.851986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:48.365 [2024-11-29 12:05:53.852035] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.365 [2024-11-29 12:05:53.854742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.365 [2024-11-29 12:05:53.854818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:48.365 pt3 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:48.365 12:05:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:48.623 malloc4 00:21:48.623 12:05:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:48.882 [2024-11-29 12:05:54.363183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:48.882 [2024-11-29 12:05:54.363343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.882 [2024-11-29 12:05:54.363388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:48.882 [2024-11-29 12:05:54.363446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.882 [2024-11-29 12:05:54.366037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.882 [2024-11-29 12:05:54.366103] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:48.882 pt4 00:21:48.882 12:05:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:48.882 12:05:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:48.882 12:05:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:49.140 [2024-11-29 12:05:54.591356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:49.140 [2024-11-29 12:05:54.593678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:49.140 [2024-11-29 12:05:54.593769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:49.140 [2024-11-29 12:05:54.593829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:49.140 [2024-11-29 12:05:54.594108] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:21:49.140 [2024-11-29 12:05:54.594134] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:49.140 [2024-11-29 12:05:54.594365] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:49.140 [2024-11-29 12:05:54.594832] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:21:49.140 [2024-11-29 12:05:54.594858] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:21:49.140 [2024-11-29 12:05:54.595011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
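A hedged sketch (not in the captured run) of the RPC sequence traced above, which builds four malloc bdevs, wraps each in a passthru bdev, and assembles the concat raid with a superblock; RPC below is only a local shorthand for the rpc.py invocation used throughout this trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b malloc$i
        $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s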
00:21:49.140 12:05:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.399 12:05:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.399 "name": "raid_bdev1", 00:21:49.399 "uuid": "f86928e3-817d-4191-a159-7c604d4521fb", 00:21:49.399 "strip_size_kb": 64, 00:21:49.399 "state": "online", 00:21:49.399 "raid_level": "concat", 00:21:49.399 "superblock": true, 00:21:49.399 "num_base_bdevs": 4, 00:21:49.399 "num_base_bdevs_discovered": 4, 00:21:49.399 "num_base_bdevs_operational": 4, 00:21:49.399 "base_bdevs_list": [ 00:21:49.399 { 00:21:49.399 "name": "pt1", 00:21:49.399 "uuid": "3db420ed-f422-51d0-af1e-967209b3204e", 00:21:49.399 "is_configured": true, 00:21:49.399 "data_offset": 2048, 00:21:49.399 "data_size": 63488 00:21:49.399 }, 00:21:49.399 { 00:21:49.399 "name": "pt2", 00:21:49.399 "uuid": "b4459ca2-48f0-59d1-b089-b7c9cddf2ace", 00:21:49.399 "is_configured": true, 00:21:49.399 "data_offset": 2048, 00:21:49.399 "data_size": 63488 00:21:49.399 }, 00:21:49.399 { 00:21:49.399 "name": "pt3", 00:21:49.399 "uuid": "4da7ef00-a073-54c1-9e7c-13fd55d04bc6", 00:21:49.399 "is_configured": true, 00:21:49.399 "data_offset": 2048, 00:21:49.399 "data_size": 63488 00:21:49.399 }, 00:21:49.399 { 00:21:49.399 "name": "pt4", 00:21:49.399 "uuid": "8f61522e-f915-5a2c-adc9-3a4d6197290a", 00:21:49.399 "is_configured": true, 00:21:49.399 "data_offset": 2048, 00:21:49.399 "data_size": 63488 00:21:49.399 } 00:21:49.399 ] 00:21:49.399 }' 00:21:49.399 12:05:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.399 12:05:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.332 12:05:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:50.332 12:05:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:50.332 [2024-11-29 12:05:55.755797] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.332 12:05:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f86928e3-817d-4191-a159-7c604d4521fb 00:21:50.332 12:05:55 -- bdev/bdev_raid.sh@380 -- # '[' -z f86928e3-817d-4191-a159-7c604d4521fb ']' 00:21:50.332 12:05:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:50.590 [2024-11-29 12:05:56.043594] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:50.590 [2024-11-29 12:05:56.043654] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:50.590 [2024-11-29 12:05:56.043787] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.590 [2024-11-29 12:05:56.043878] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.590 [2024-11-29 12:05:56.043890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:21:50.590 12:05:56 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.590 12:05:56 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:50.848 12:05:56 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:50.848 12:05:56 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:50.848 12:05:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:50.848 12:05:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
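A hedged note on the check performed above: verify_raid_bdev_state (bdev_raid.sh@117-129) dumps the raid via RPC and filters it with jq, and the fields it compares are the ones visible in the JSON block. A minimal equivalent of that probe, reusing the RPC shorthand from the previous sketch:

    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")
        | "\(.state) \(.raid_level) \(.strip_size_kb) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # expected at this point in the run: online concat 64 4/4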
00:21:51.106 12:05:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:51.106 12:05:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:51.672 12:05:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:51.672 12:05:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:51.672 12:05:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:51.672 12:05:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:51.931 12:05:57 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:51.931 12:05:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:52.190 12:05:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:52.190 12:05:57 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:52.190 12:05:57 -- common/autotest_common.sh@650 -- # local es=0 00:21:52.190 12:05:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:52.190 12:05:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.190 12:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.190 12:05:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.190 12:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.190 12:05:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.190 12:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.190 12:05:57 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.190 12:05:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:52.190 12:05:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:52.450 [2024-11-29 12:05:57.915889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:52.450 [2024-11-29 12:05:57.918154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:52.450 [2024-11-29 12:05:57.918230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:52.450 [2024-11-29 12:05:57.918277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:52.450 [2024-11-29 12:05:57.918341] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:52.450 [2024-11-29 12:05:57.918462] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:52.450 [2024-11-29 12:05:57.918511] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:52.450 
[2024-11-29 12:05:57.918572] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:52.450 [2024-11-29 12:05:57.918621] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.450 [2024-11-29 12:05:57.918633] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:21:52.450 request: 00:21:52.450 { 00:21:52.450 "name": "raid_bdev1", 00:21:52.450 "raid_level": "concat", 00:21:52.450 "base_bdevs": [ 00:21:52.450 "malloc1", 00:21:52.450 "malloc2", 00:21:52.450 "malloc3", 00:21:52.450 "malloc4" 00:21:52.450 ], 00:21:52.450 "superblock": false, 00:21:52.450 "strip_size_kb": 64, 00:21:52.450 "method": "bdev_raid_create", 00:21:52.450 "req_id": 1 00:21:52.450 } 00:21:52.450 Got JSON-RPC error response 00:21:52.450 response: 00:21:52.450 { 00:21:52.450 "code": -17, 00:21:52.450 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:52.450 } 00:21:52.450 12:05:57 -- common/autotest_common.sh@653 -- # es=1 00:21:52.450 12:05:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.450 12:05:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.450 12:05:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.450 12:05:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:52.450 12:05:57 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.708 12:05:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:52.708 12:05:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:52.708 12:05:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:52.967 [2024-11-29 12:05:58.423947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:52.967 [2024-11-29 12:05:58.424076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.967 [2024-11-29 12:05:58.424118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:52.967 [2024-11-29 12:05:58.424148] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.967 [2024-11-29 12:05:58.426756] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.967 [2024-11-29 12:05:58.426841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:52.967 [2024-11-29 12:05:58.426949] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:52.967 [2024-11-29 12:05:58.427033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:52.967 pt1 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.967 12:05:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.533 12:05:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.533 "name": "raid_bdev1", 00:21:53.533 "uuid": "f86928e3-817d-4191-a159-7c604d4521fb", 00:21:53.533 "strip_size_kb": 64, 00:21:53.533 "state": "configuring", 00:21:53.533 "raid_level": "concat", 00:21:53.533 "superblock": true, 00:21:53.533 "num_base_bdevs": 4, 00:21:53.533 "num_base_bdevs_discovered": 1, 00:21:53.533 "num_base_bdevs_operational": 4, 00:21:53.533 "base_bdevs_list": [ 00:21:53.533 { 00:21:53.533 "name": "pt1", 00:21:53.533 "uuid": "3db420ed-f422-51d0-af1e-967209b3204e", 00:21:53.533 "is_configured": true, 00:21:53.533 "data_offset": 2048, 00:21:53.533 "data_size": 63488 00:21:53.533 }, 00:21:53.533 { 00:21:53.533 "name": null, 00:21:53.533 "uuid": "b4459ca2-48f0-59d1-b089-b7c9cddf2ace", 00:21:53.533 "is_configured": false, 00:21:53.533 "data_offset": 2048, 00:21:53.533 "data_size": 63488 00:21:53.533 }, 00:21:53.533 { 00:21:53.533 "name": null, 00:21:53.533 "uuid": "4da7ef00-a073-54c1-9e7c-13fd55d04bc6", 00:21:53.533 "is_configured": false, 00:21:53.533 "data_offset": 2048, 00:21:53.533 "data_size": 63488 00:21:53.533 }, 00:21:53.533 { 00:21:53.533 "name": null, 00:21:53.533 "uuid": "8f61522e-f915-5a2c-adc9-3a4d6197290a", 00:21:53.533 "is_configured": false, 00:21:53.533 "data_offset": 2048, 00:21:53.533 "data_size": 63488 00:21:53.533 } 00:21:53.533 ] 00:21:53.533 }' 00:21:53.533 12:05:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.533 12:05:58 -- common/autotest_common.sh@10 -- # set +x 00:21:54.099 12:05:59 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:54.099 12:05:59 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:54.358 [2024-11-29 12:05:59.706935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:54.358 [2024-11-29 12:05:59.707054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.358 [2024-11-29 12:05:59.707106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:54.358 [2024-11-29 12:05:59.707132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.358 [2024-11-29 12:05:59.707646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.358 [2024-11-29 12:05:59.707699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:54.358 [2024-11-29 12:05:59.707810] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:54.358 [2024-11-29 12:05:59.707839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:54.358 pt2 00:21:54.358 12:05:59 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:54.616 [2024-11-29 12:05:59.994997] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
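A hedged aside (not part of the captured output): the preceding steps first prove the error path, then exercise base-bdev removal. Creating the raid directly on malloc1..malloc4 must fail with -17 (File exists) because those bdevs already carry a raid superblock, so the test wraps the call in the NOT helper; afterwards pt2 is re-created (bdev_raid.sh@416) and deleted again (@417) before the "configuring" state is re-verified. A minimal form of the negative check, with RPC as the shorthand introduced earlier:

    if $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi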
00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.616 12:06:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.875 12:06:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.875 "name": "raid_bdev1", 00:21:54.875 "uuid": "f86928e3-817d-4191-a159-7c604d4521fb", 00:21:54.875 "strip_size_kb": 64, 00:21:54.875 "state": "configuring", 00:21:54.875 "raid_level": "concat", 00:21:54.875 "superblock": true, 00:21:54.875 "num_base_bdevs": 4, 00:21:54.875 "num_base_bdevs_discovered": 1, 00:21:54.875 "num_base_bdevs_operational": 4, 00:21:54.875 "base_bdevs_list": [ 00:21:54.875 { 00:21:54.875 "name": "pt1", 00:21:54.875 "uuid": "3db420ed-f422-51d0-af1e-967209b3204e", 00:21:54.875 "is_configured": true, 00:21:54.875 "data_offset": 2048, 00:21:54.875 "data_size": 63488 00:21:54.875 }, 00:21:54.875 { 00:21:54.875 "name": null, 00:21:54.875 "uuid": "b4459ca2-48f0-59d1-b089-b7c9cddf2ace", 00:21:54.875 "is_configured": false, 00:21:54.875 "data_offset": 2048, 00:21:54.875 "data_size": 63488 00:21:54.875 }, 00:21:54.875 { 00:21:54.875 "name": null, 00:21:54.875 "uuid": "4da7ef00-a073-54c1-9e7c-13fd55d04bc6", 00:21:54.875 "is_configured": false, 00:21:54.875 "data_offset": 2048, 00:21:54.875 "data_size": 63488 00:21:54.875 }, 00:21:54.875 { 00:21:54.875 "name": null, 00:21:54.875 "uuid": "8f61522e-f915-5a2c-adc9-3a4d6197290a", 00:21:54.875 "is_configured": false, 00:21:54.875 "data_offset": 2048, 00:21:54.875 "data_size": 63488 00:21:54.875 } 00:21:54.875 ] 00:21:54.875 }' 00:21:54.875 12:06:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.875 12:06:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.809 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:55.809 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:55.809 12:06:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:55.809 [2024-11-29 12:06:01.291248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:55.809 [2024-11-29 12:06:01.291370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.809 [2024-11-29 12:06:01.291417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:55.809 [2024-11-29 12:06:01.291445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.809 [2024-11-29 12:06:01.291946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.809 [2024-11-29 12:06:01.292014] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:55.809 [2024-11-29 12:06:01.292112] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:21:55.809 [2024-11-29 12:06:01.292140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.809 pt2 00:21:55.809 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:55.809 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:55.809 12:06:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:56.066 [2024-11-29 12:06:01.575323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:56.066 [2024-11-29 12:06:01.575447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.066 [2024-11-29 12:06:01.575487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:56.066 [2024-11-29 12:06:01.575519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.066 [2024-11-29 12:06:01.576018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.066 [2024-11-29 12:06:01.576075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:56.066 [2024-11-29 12:06:01.576168] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:56.066 [2024-11-29 12:06:01.576207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:56.066 pt3 00:21:56.325 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:56.325 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:56.325 12:06:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:56.582 [2024-11-29 12:06:01.867430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:56.582 [2024-11-29 12:06:01.867564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.582 [2024-11-29 12:06:01.867624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:56.582 [2024-11-29 12:06:01.867661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.582 [2024-11-29 12:06:01.868234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.582 [2024-11-29 12:06:01.868341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:56.582 [2024-11-29 12:06:01.868437] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:56.582 [2024-11-29 12:06:01.868466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:56.582 [2024-11-29 12:06:01.868619] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:56.582 [2024-11-29 12:06:01.868633] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:56.582 [2024-11-29 12:06:01.868723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:21:56.582 [2024-11-29 12:06:01.869097] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:56.582 [2024-11-29 12:06:01.869124] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:56.582 [2024-11-29 12:06:01.869239] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:21:56.582 pt4 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.582 12:06:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.841 12:06:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:56.841 "name": "raid_bdev1", 00:21:56.841 "uuid": "f86928e3-817d-4191-a159-7c604d4521fb", 00:21:56.841 "strip_size_kb": 64, 00:21:56.841 "state": "online", 00:21:56.841 "raid_level": "concat", 00:21:56.841 "superblock": true, 00:21:56.841 "num_base_bdevs": 4, 00:21:56.841 "num_base_bdevs_discovered": 4, 00:21:56.841 "num_base_bdevs_operational": 4, 00:21:56.841 "base_bdevs_list": [ 00:21:56.841 { 00:21:56.841 "name": "pt1", 00:21:56.841 "uuid": "3db420ed-f422-51d0-af1e-967209b3204e", 00:21:56.841 "is_configured": true, 00:21:56.841 "data_offset": 2048, 00:21:56.841 "data_size": 63488 00:21:56.841 }, 00:21:56.841 { 00:21:56.841 "name": "pt2", 00:21:56.841 "uuid": "b4459ca2-48f0-59d1-b089-b7c9cddf2ace", 00:21:56.841 "is_configured": true, 00:21:56.841 "data_offset": 2048, 00:21:56.841 "data_size": 63488 00:21:56.841 }, 00:21:56.841 { 00:21:56.841 "name": "pt3", 00:21:56.841 "uuid": "4da7ef00-a073-54c1-9e7c-13fd55d04bc6", 00:21:56.841 "is_configured": true, 00:21:56.841 "data_offset": 2048, 00:21:56.841 "data_size": 63488 00:21:56.841 }, 00:21:56.841 { 00:21:56.841 "name": "pt4", 00:21:56.841 "uuid": "8f61522e-f915-5a2c-adc9-3a4d6197290a", 00:21:56.841 "is_configured": true, 00:21:56.841 "data_offset": 2048, 00:21:56.841 "data_size": 63488 00:21:56.841 } 00:21:56.841 ] 00:21:56.841 }' 00:21:56.841 12:06:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:56.841 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:21:57.407 12:06:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:57.407 12:06:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:57.666 [2024-11-29 12:06:03.103906] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.666 12:06:03 -- bdev/bdev_raid.sh@430 -- # '[' f86928e3-817d-4191-a159-7c604d4521fb '!=' f86928e3-817d-4191-a159-7c604d4521fb ']' 00:21:57.666 12:06:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:21:57.666 12:06:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:57.666 12:06:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:57.666 12:06:03 -- bdev/bdev_raid.sh@511 -- # killprocess 131904 00:21:57.666 12:06:03 -- common/autotest_common.sh@936 -- # '[' 
-z 131904 ']' 00:21:57.666 12:06:03 -- common/autotest_common.sh@940 -- # kill -0 131904 00:21:57.666 12:06:03 -- common/autotest_common.sh@941 -- # uname 00:21:57.666 12:06:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.666 12:06:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131904 00:21:57.666 12:06:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:57.666 12:06:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:57.666 12:06:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131904' 00:21:57.666 killing process with pid 131904 00:21:57.666 12:06:03 -- common/autotest_common.sh@955 -- # kill 131904 00:21:57.666 12:06:03 -- common/autotest_common.sh@960 -- # wait 131904 00:21:57.666 [2024-11-29 12:06:03.155131] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.666 [2024-11-29 12:06:03.155252] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.666 [2024-11-29 12:06:03.155340] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.666 [2024-11-29 12:06:03.155507] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:57.925 [2024-11-29 12:06:03.212179] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:58.184 00:21:58.184 real 0m12.272s 00:21:58.184 user 0m22.389s 00:21:58.184 sys 0m1.561s 00:21:58.184 12:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:58.184 12:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:58.184 ************************************ 00:21:58.184 END TEST raid_superblock_test 00:21:58.184 ************************************ 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:21:58.184 12:06:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:58.184 12:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.184 12:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:58.184 ************************************ 00:21:58.184 START TEST raid_state_function_test 00:21:58.184 ************************************ 00:21:58.184 12:06:03 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.184 12:06:03 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=132239 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132239' 00:21:58.184 Process raid pid: 132239 00:21:58.184 12:06:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132239 /var/tmp/spdk-raid.sock 00:21:58.184 12:06:03 -- common/autotest_common.sh@829 -- # '[' -z 132239 ']' 00:21:58.184 12:06:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:58.184 12:06:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:58.184 12:06:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:58.184 12:06:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.184 12:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:58.184 [2024-11-29 12:06:03.577800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
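A hedged note (not in the captured output): raid_state_function_test runs raid1 over four named base bdevs with no superblock, so strip_size stays 0 and superblock_create_arg is left empty in the trace above; the service app is started the same way as for the superblock test, only with the additional -i 0 argument seen here. A minimal equivalent of the launch, assuming the same paths as this run:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!                                   # 132239 in this run
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock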
00:21:58.184 [2024-11-29 12:06:03.578064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.443 [2024-11-29 12:06:03.723308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.443 [2024-11-29 12:06:03.828682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.443 [2024-11-29 12:06:03.884704] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.377 12:06:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.377 12:06:04 -- common/autotest_common.sh@862 -- # return 0 00:21:59.377 12:06:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:59.377 [2024-11-29 12:06:04.871420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.377 [2024-11-29 12:06:04.871545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.377 [2024-11-29 12:06:04.871561] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.377 [2024-11-29 12:06:04.871583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.377 [2024-11-29 12:06:04.871591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.377 [2024-11-29 12:06:04.871652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.377 [2024-11-29 12:06:04.871663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:59.377 [2024-11-29 12:06:04.871692] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.635 12:06:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.892 12:06:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.892 "name": "Existed_Raid", 00:21:59.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.892 "strip_size_kb": 0, 00:21:59.892 "state": "configuring", 00:21:59.892 "raid_level": "raid1", 00:21:59.892 "superblock": false, 00:21:59.892 "num_base_bdevs": 4, 00:21:59.892 "num_base_bdevs_discovered": 0, 00:21:59.892 "num_base_bdevs_operational": 4, 00:21:59.892 "base_bdevs_list": [ 00:21:59.892 { 00:21:59.892 "name": 
"BaseBdev1", 00:21:59.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.892 "is_configured": false, 00:21:59.892 "data_offset": 0, 00:21:59.892 "data_size": 0 00:21:59.892 }, 00:21:59.892 { 00:21:59.892 "name": "BaseBdev2", 00:21:59.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.892 "is_configured": false, 00:21:59.892 "data_offset": 0, 00:21:59.892 "data_size": 0 00:21:59.892 }, 00:21:59.892 { 00:21:59.892 "name": "BaseBdev3", 00:21:59.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.892 "is_configured": false, 00:21:59.892 "data_offset": 0, 00:21:59.892 "data_size": 0 00:21:59.892 }, 00:21:59.892 { 00:21:59.892 "name": "BaseBdev4", 00:21:59.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.892 "is_configured": false, 00:21:59.892 "data_offset": 0, 00:21:59.892 "data_size": 0 00:21:59.892 } 00:21:59.892 ] 00:21:59.892 }' 00:21:59.892 12:06:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.892 12:06:05 -- common/autotest_common.sh@10 -- # set +x 00:22:00.457 12:06:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:00.714 [2024-11-29 12:06:06.139497] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:00.714 [2024-11-29 12:06:06.139571] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:00.714 12:06:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:00.972 [2024-11-29 12:06:06.375592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:00.972 [2024-11-29 12:06:06.375686] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:00.972 [2024-11-29 12:06:06.375700] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:00.972 [2024-11-29 12:06:06.375729] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:00.972 [2024-11-29 12:06:06.375738] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:00.972 [2024-11-29 12:06:06.375757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:00.972 [2024-11-29 12:06:06.375765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:00.972 [2024-11-29 12:06:06.375792] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:00.972 12:06:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:01.230 [2024-11-29 12:06:06.647775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.230 BaseBdev1 00:22:01.230 12:06:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:01.230 12:06:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:01.230 12:06:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:01.230 12:06:06 -- common/autotest_common.sh@899 -- # local i 00:22:01.230 12:06:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:01.230 12:06:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:01.230 12:06:06 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.489 12:06:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:01.747 [ 00:22:01.747 { 00:22:01.747 "name": "BaseBdev1", 00:22:01.747 "aliases": [ 00:22:01.747 "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499" 00:22:01.747 ], 00:22:01.747 "product_name": "Malloc disk", 00:22:01.747 "block_size": 512, 00:22:01.747 "num_blocks": 65536, 00:22:01.747 "uuid": "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499", 00:22:01.747 "assigned_rate_limits": { 00:22:01.747 "rw_ios_per_sec": 0, 00:22:01.747 "rw_mbytes_per_sec": 0, 00:22:01.747 "r_mbytes_per_sec": 0, 00:22:01.747 "w_mbytes_per_sec": 0 00:22:01.747 }, 00:22:01.747 "claimed": true, 00:22:01.747 "claim_type": "exclusive_write", 00:22:01.747 "zoned": false, 00:22:01.747 "supported_io_types": { 00:22:01.747 "read": true, 00:22:01.747 "write": true, 00:22:01.747 "unmap": true, 00:22:01.747 "write_zeroes": true, 00:22:01.747 "flush": true, 00:22:01.747 "reset": true, 00:22:01.747 "compare": false, 00:22:01.747 "compare_and_write": false, 00:22:01.747 "abort": true, 00:22:01.747 "nvme_admin": false, 00:22:01.747 "nvme_io": false 00:22:01.747 }, 00:22:01.747 "memory_domains": [ 00:22:01.747 { 00:22:01.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.747 "dma_device_type": 2 00:22:01.747 } 00:22:01.747 ], 00:22:01.747 "driver_specific": {} 00:22:01.747 } 00:22:01.747 ] 00:22:01.747 12:06:07 -- common/autotest_common.sh@905 -- # return 0 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.747 12:06:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.005 12:06:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.005 "name": "Existed_Raid", 00:22:02.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.005 "strip_size_kb": 0, 00:22:02.005 "state": "configuring", 00:22:02.005 "raid_level": "raid1", 00:22:02.005 "superblock": false, 00:22:02.005 "num_base_bdevs": 4, 00:22:02.005 "num_base_bdevs_discovered": 1, 00:22:02.005 "num_base_bdevs_operational": 4, 00:22:02.005 "base_bdevs_list": [ 00:22:02.005 { 00:22:02.005 "name": "BaseBdev1", 00:22:02.005 "uuid": "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499", 00:22:02.005 "is_configured": true, 00:22:02.005 "data_offset": 0, 00:22:02.005 "data_size": 65536 00:22:02.005 }, 00:22:02.005 { 00:22:02.005 "name": "BaseBdev2", 00:22:02.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.005 "is_configured": false, 00:22:02.005 "data_offset": 0, 00:22:02.005 "data_size": 0 00:22:02.005 }, 
00:22:02.005 { 00:22:02.005 "name": "BaseBdev3", 00:22:02.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.005 "is_configured": false, 00:22:02.005 "data_offset": 0, 00:22:02.005 "data_size": 0 00:22:02.005 }, 00:22:02.005 { 00:22:02.005 "name": "BaseBdev4", 00:22:02.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.005 "is_configured": false, 00:22:02.005 "data_offset": 0, 00:22:02.005 "data_size": 0 00:22:02.005 } 00:22:02.005 ] 00:22:02.005 }' 00:22:02.005 12:06:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.005 12:06:07 -- common/autotest_common.sh@10 -- # set +x 00:22:02.939 12:06:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:02.939 [2024-11-29 12:06:08.372173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:02.939 [2024-11-29 12:06:08.372279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:02.939 12:06:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:02.939 12:06:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:03.197 [2024-11-29 12:06:08.608315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.197 [2024-11-29 12:06:08.610637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:03.197 [2024-11-29 12:06:08.610733] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:03.197 [2024-11-29 12:06:08.610747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:03.197 [2024-11-29 12:06:08.610775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:03.197 [2024-11-29 12:06:08.610784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:03.197 [2024-11-29 12:06:08.610802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.197 12:06:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.456 12:06:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.456 "name": "Existed_Raid", 00:22:03.456 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:03.456 "strip_size_kb": 0, 00:22:03.456 "state": "configuring", 00:22:03.456 "raid_level": "raid1", 00:22:03.456 "superblock": false, 00:22:03.456 "num_base_bdevs": 4, 00:22:03.456 "num_base_bdevs_discovered": 1, 00:22:03.456 "num_base_bdevs_operational": 4, 00:22:03.456 "base_bdevs_list": [ 00:22:03.456 { 00:22:03.456 "name": "BaseBdev1", 00:22:03.456 "uuid": "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499", 00:22:03.456 "is_configured": true, 00:22:03.456 "data_offset": 0, 00:22:03.456 "data_size": 65536 00:22:03.456 }, 00:22:03.456 { 00:22:03.456 "name": "BaseBdev2", 00:22:03.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.456 "is_configured": false, 00:22:03.456 "data_offset": 0, 00:22:03.456 "data_size": 0 00:22:03.456 }, 00:22:03.456 { 00:22:03.456 "name": "BaseBdev3", 00:22:03.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.456 "is_configured": false, 00:22:03.456 "data_offset": 0, 00:22:03.456 "data_size": 0 00:22:03.456 }, 00:22:03.456 { 00:22:03.456 "name": "BaseBdev4", 00:22:03.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.456 "is_configured": false, 00:22:03.456 "data_offset": 0, 00:22:03.456 "data_size": 0 00:22:03.456 } 00:22:03.456 ] 00:22:03.456 }' 00:22:03.456 12:06:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.456 12:06:08 -- common/autotest_common.sh@10 -- # set +x 00:22:04.389 12:06:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:04.389 [2024-11-29 12:06:09.898828] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:04.389 BaseBdev2 00:22:04.647 12:06:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:04.647 12:06:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:04.647 12:06:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:04.647 12:06:09 -- common/autotest_common.sh@899 -- # local i 00:22:04.647 12:06:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:04.647 12:06:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:04.647 12:06:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.904 12:06:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:05.162 [ 00:22:05.162 { 00:22:05.162 "name": "BaseBdev2", 00:22:05.162 "aliases": [ 00:22:05.162 "9778a6aa-7911-404d-9f92-499d0a5850f0" 00:22:05.162 ], 00:22:05.162 "product_name": "Malloc disk", 00:22:05.162 "block_size": 512, 00:22:05.162 "num_blocks": 65536, 00:22:05.162 "uuid": "9778a6aa-7911-404d-9f92-499d0a5850f0", 00:22:05.162 "assigned_rate_limits": { 00:22:05.162 "rw_ios_per_sec": 0, 00:22:05.162 "rw_mbytes_per_sec": 0, 00:22:05.162 "r_mbytes_per_sec": 0, 00:22:05.162 "w_mbytes_per_sec": 0 00:22:05.162 }, 00:22:05.162 "claimed": true, 00:22:05.162 "claim_type": "exclusive_write", 00:22:05.162 "zoned": false, 00:22:05.162 "supported_io_types": { 00:22:05.162 "read": true, 00:22:05.162 "write": true, 00:22:05.162 "unmap": true, 00:22:05.162 "write_zeroes": true, 00:22:05.162 "flush": true, 00:22:05.162 "reset": true, 00:22:05.162 "compare": false, 00:22:05.162 "compare_and_write": false, 00:22:05.162 "abort": true, 00:22:05.162 "nvme_admin": false, 00:22:05.162 "nvme_io": false 00:22:05.162 }, 00:22:05.162 "memory_domains": [ 00:22:05.162 { 
00:22:05.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.162 "dma_device_type": 2 00:22:05.162 } 00:22:05.162 ], 00:22:05.162 "driver_specific": {} 00:22:05.162 } 00:22:05.162 ] 00:22:05.162 12:06:10 -- common/autotest_common.sh@905 -- # return 0 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.162 12:06:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.420 12:06:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.420 "name": "Existed_Raid", 00:22:05.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.420 "strip_size_kb": 0, 00:22:05.420 "state": "configuring", 00:22:05.420 "raid_level": "raid1", 00:22:05.420 "superblock": false, 00:22:05.420 "num_base_bdevs": 4, 00:22:05.421 "num_base_bdevs_discovered": 2, 00:22:05.421 "num_base_bdevs_operational": 4, 00:22:05.421 "base_bdevs_list": [ 00:22:05.421 { 00:22:05.421 "name": "BaseBdev1", 00:22:05.421 "uuid": "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499", 00:22:05.421 "is_configured": true, 00:22:05.421 "data_offset": 0, 00:22:05.421 "data_size": 65536 00:22:05.421 }, 00:22:05.421 { 00:22:05.421 "name": "BaseBdev2", 00:22:05.421 "uuid": "9778a6aa-7911-404d-9f92-499d0a5850f0", 00:22:05.421 "is_configured": true, 00:22:05.421 "data_offset": 0, 00:22:05.421 "data_size": 65536 00:22:05.421 }, 00:22:05.421 { 00:22:05.421 "name": "BaseBdev3", 00:22:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.421 "is_configured": false, 00:22:05.421 "data_offset": 0, 00:22:05.421 "data_size": 0 00:22:05.421 }, 00:22:05.421 { 00:22:05.421 "name": "BaseBdev4", 00:22:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.421 "is_configured": false, 00:22:05.421 "data_offset": 0, 00:22:05.421 "data_size": 0 00:22:05.421 } 00:22:05.421 ] 00:22:05.421 }' 00:22:05.421 12:06:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.421 12:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:06.009 12:06:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:06.268 [2024-11-29 12:06:11.619966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:06.268 BaseBdev3 00:22:06.268 12:06:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:06.268 12:06:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:06.268 12:06:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:06.268 12:06:11 -- 
common/autotest_common.sh@899 -- # local i 00:22:06.268 12:06:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:06.268 12:06:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:06.268 12:06:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:06.526 12:06:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:06.785 [ 00:22:06.785 { 00:22:06.785 "name": "BaseBdev3", 00:22:06.785 "aliases": [ 00:22:06.785 "71ccfb8b-4111-4f42-bed9-9370779316a8" 00:22:06.785 ], 00:22:06.785 "product_name": "Malloc disk", 00:22:06.785 "block_size": 512, 00:22:06.785 "num_blocks": 65536, 00:22:06.785 "uuid": "71ccfb8b-4111-4f42-bed9-9370779316a8", 00:22:06.785 "assigned_rate_limits": { 00:22:06.785 "rw_ios_per_sec": 0, 00:22:06.785 "rw_mbytes_per_sec": 0, 00:22:06.785 "r_mbytes_per_sec": 0, 00:22:06.785 "w_mbytes_per_sec": 0 00:22:06.785 }, 00:22:06.785 "claimed": true, 00:22:06.785 "claim_type": "exclusive_write", 00:22:06.785 "zoned": false, 00:22:06.785 "supported_io_types": { 00:22:06.785 "read": true, 00:22:06.785 "write": true, 00:22:06.785 "unmap": true, 00:22:06.785 "write_zeroes": true, 00:22:06.785 "flush": true, 00:22:06.785 "reset": true, 00:22:06.785 "compare": false, 00:22:06.785 "compare_and_write": false, 00:22:06.785 "abort": true, 00:22:06.785 "nvme_admin": false, 00:22:06.785 "nvme_io": false 00:22:06.785 }, 00:22:06.785 "memory_domains": [ 00:22:06.785 { 00:22:06.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.785 "dma_device_type": 2 00:22:06.785 } 00:22:06.785 ], 00:22:06.785 "driver_specific": {} 00:22:06.785 } 00:22:06.785 ] 00:22:06.785 12:06:12 -- common/autotest_common.sh@905 -- # return 0 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.785 12:06:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.043 12:06:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.043 "name": "Existed_Raid", 00:22:07.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.043 "strip_size_kb": 0, 00:22:07.043 "state": "configuring", 00:22:07.043 "raid_level": "raid1", 00:22:07.043 "superblock": false, 00:22:07.043 "num_base_bdevs": 4, 00:22:07.043 "num_base_bdevs_discovered": 3, 00:22:07.043 "num_base_bdevs_operational": 4, 00:22:07.043 "base_bdevs_list": [ 00:22:07.043 { 00:22:07.043 "name": "BaseBdev1", 
00:22:07.043 "uuid": "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499", 00:22:07.043 "is_configured": true, 00:22:07.043 "data_offset": 0, 00:22:07.043 "data_size": 65536 00:22:07.043 }, 00:22:07.043 { 00:22:07.043 "name": "BaseBdev2", 00:22:07.043 "uuid": "9778a6aa-7911-404d-9f92-499d0a5850f0", 00:22:07.043 "is_configured": true, 00:22:07.043 "data_offset": 0, 00:22:07.043 "data_size": 65536 00:22:07.043 }, 00:22:07.043 { 00:22:07.043 "name": "BaseBdev3", 00:22:07.043 "uuid": "71ccfb8b-4111-4f42-bed9-9370779316a8", 00:22:07.043 "is_configured": true, 00:22:07.043 "data_offset": 0, 00:22:07.043 "data_size": 65536 00:22:07.043 }, 00:22:07.043 { 00:22:07.043 "name": "BaseBdev4", 00:22:07.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.043 "is_configured": false, 00:22:07.043 "data_offset": 0, 00:22:07.043 "data_size": 0 00:22:07.043 } 00:22:07.043 ] 00:22:07.043 }' 00:22:07.043 12:06:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.043 12:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:07.611 12:06:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:07.868 [2024-11-29 12:06:13.357482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:07.868 [2024-11-29 12:06:13.357568] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:07.868 [2024-11-29 12:06:13.357582] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:07.868 [2024-11-29 12:06:13.357738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:07.868 [2024-11-29 12:06:13.358199] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:07.868 [2024-11-29 12:06:13.358224] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:07.868 [2024-11-29 12:06:13.358511] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.868 BaseBdev4 00:22:07.868 12:06:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:07.868 12:06:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:07.868 12:06:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:07.868 12:06:13 -- common/autotest_common.sh@899 -- # local i 00:22:07.868 12:06:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:07.868 12:06:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:07.868 12:06:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.431 12:06:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:08.431 [ 00:22:08.431 { 00:22:08.431 "name": "BaseBdev4", 00:22:08.431 "aliases": [ 00:22:08.431 "8388d0fd-daf6-43cd-b812-01376b47a2b0" 00:22:08.431 ], 00:22:08.431 "product_name": "Malloc disk", 00:22:08.431 "block_size": 512, 00:22:08.431 "num_blocks": 65536, 00:22:08.431 "uuid": "8388d0fd-daf6-43cd-b812-01376b47a2b0", 00:22:08.431 "assigned_rate_limits": { 00:22:08.431 "rw_ios_per_sec": 0, 00:22:08.431 "rw_mbytes_per_sec": 0, 00:22:08.431 "r_mbytes_per_sec": 0, 00:22:08.431 "w_mbytes_per_sec": 0 00:22:08.431 }, 00:22:08.431 "claimed": true, 00:22:08.431 "claim_type": "exclusive_write", 00:22:08.431 "zoned": false, 00:22:08.431 "supported_io_types": { 
00:22:08.431 "read": true, 00:22:08.431 "write": true, 00:22:08.431 "unmap": true, 00:22:08.431 "write_zeroes": true, 00:22:08.431 "flush": true, 00:22:08.431 "reset": true, 00:22:08.431 "compare": false, 00:22:08.431 "compare_and_write": false, 00:22:08.431 "abort": true, 00:22:08.431 "nvme_admin": false, 00:22:08.431 "nvme_io": false 00:22:08.431 }, 00:22:08.431 "memory_domains": [ 00:22:08.431 { 00:22:08.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.431 "dma_device_type": 2 00:22:08.431 } 00:22:08.431 ], 00:22:08.431 "driver_specific": {} 00:22:08.431 } 00:22:08.431 ] 00:22:08.431 12:06:13 -- common/autotest_common.sh@905 -- # return 0 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.431 12:06:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.688 12:06:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.688 "name": "Existed_Raid", 00:22:08.688 "uuid": "831836f8-d3c9-4ba3-b8b4-a99dc3edde6f", 00:22:08.688 "strip_size_kb": 0, 00:22:08.688 "state": "online", 00:22:08.688 "raid_level": "raid1", 00:22:08.688 "superblock": false, 00:22:08.688 "num_base_bdevs": 4, 00:22:08.689 "num_base_bdevs_discovered": 4, 00:22:08.689 "num_base_bdevs_operational": 4, 00:22:08.689 "base_bdevs_list": [ 00:22:08.689 { 00:22:08.689 "name": "BaseBdev1", 00:22:08.689 "uuid": "9ec55fd1-fd44-4fda-80c1-00ddfd8c6499", 00:22:08.689 "is_configured": true, 00:22:08.689 "data_offset": 0, 00:22:08.689 "data_size": 65536 00:22:08.689 }, 00:22:08.689 { 00:22:08.689 "name": "BaseBdev2", 00:22:08.689 "uuid": "9778a6aa-7911-404d-9f92-499d0a5850f0", 00:22:08.689 "is_configured": true, 00:22:08.689 "data_offset": 0, 00:22:08.689 "data_size": 65536 00:22:08.689 }, 00:22:08.689 { 00:22:08.689 "name": "BaseBdev3", 00:22:08.689 "uuid": "71ccfb8b-4111-4f42-bed9-9370779316a8", 00:22:08.689 "is_configured": true, 00:22:08.689 "data_offset": 0, 00:22:08.689 "data_size": 65536 00:22:08.689 }, 00:22:08.689 { 00:22:08.689 "name": "BaseBdev4", 00:22:08.689 "uuid": "8388d0fd-daf6-43cd-b812-01376b47a2b0", 00:22:08.689 "is_configured": true, 00:22:08.689 "data_offset": 0, 00:22:08.689 "data_size": 65536 00:22:08.689 } 00:22:08.689 ] 00:22:08.689 }' 00:22:08.689 12:06:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.689 12:06:14 -- common/autotest_common.sh@10 -- # set +x 00:22:09.624 12:06:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:09.624 [2024-11-29 12:06:15.086113] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.624 12:06:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.882 12:06:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.882 "name": "Existed_Raid", 00:22:09.882 "uuid": "831836f8-d3c9-4ba3-b8b4-a99dc3edde6f", 00:22:09.882 "strip_size_kb": 0, 00:22:09.882 "state": "online", 00:22:09.882 "raid_level": "raid1", 00:22:09.882 "superblock": false, 00:22:09.882 "num_base_bdevs": 4, 00:22:09.882 "num_base_bdevs_discovered": 3, 00:22:09.882 "num_base_bdevs_operational": 3, 00:22:09.882 "base_bdevs_list": [ 00:22:09.882 { 00:22:09.882 "name": null, 00:22:09.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.882 "is_configured": false, 00:22:09.882 "data_offset": 0, 00:22:09.882 "data_size": 65536 00:22:09.882 }, 00:22:09.882 { 00:22:09.882 "name": "BaseBdev2", 00:22:09.882 "uuid": "9778a6aa-7911-404d-9f92-499d0a5850f0", 00:22:09.882 "is_configured": true, 00:22:09.882 "data_offset": 0, 00:22:09.882 "data_size": 65536 00:22:09.882 }, 00:22:09.882 { 00:22:09.882 "name": "BaseBdev3", 00:22:09.882 "uuid": "71ccfb8b-4111-4f42-bed9-9370779316a8", 00:22:09.882 "is_configured": true, 00:22:09.882 "data_offset": 0, 00:22:09.882 "data_size": 65536 00:22:09.882 }, 00:22:09.882 { 00:22:09.882 "name": "BaseBdev4", 00:22:09.882 "uuid": "8388d0fd-daf6-43cd-b812-01376b47a2b0", 00:22:09.882 "is_configured": true, 00:22:09.882 "data_offset": 0, 00:22:09.882 "data_size": 65536 00:22:09.882 } 00:22:09.882 ] 00:22:09.882 }' 00:22:09.882 12:06:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.882 12:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.815 12:06:16 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:11.073 [2024-11-29 12:06:16.495003] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.073 12:06:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:11.073 12:06:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:11.073 12:06:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.073 12:06:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:11.331 12:06:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:11.331 12:06:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:11.332 12:06:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:11.589 [2024-11-29 12:06:16.999050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:11.590 12:06:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:11.590 12:06:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:11.590 12:06:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.590 12:06:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:11.848 12:06:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:11.848 12:06:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:11.848 12:06:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:12.106 [2024-11-29 12:06:17.558169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:12.106 [2024-11-29 12:06:17.558232] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.106 [2024-11-29 12:06:17.558332] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.106 [2024-11-29 12:06:17.573952] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.106 [2024-11-29 12:06:17.574004] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:22:12.106 12:06:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:12.106 12:06:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:12.106 12:06:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.106 12:06:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:12.364 12:06:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:12.364 12:06:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:12.364 12:06:17 -- bdev/bdev_raid.sh@287 -- # killprocess 132239 00:22:12.364 12:06:17 -- common/autotest_common.sh@936 -- # '[' -z 132239 ']' 00:22:12.364 12:06:17 -- common/autotest_common.sh@940 -- # kill -0 132239 00:22:12.364 12:06:17 -- common/autotest_common.sh@941 -- # uname 00:22:12.364 12:06:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:12.364 12:06:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132239 00:22:12.364 12:06:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:12.364 12:06:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:12.364 12:06:17 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 132239' 00:22:12.364 killing process with pid 132239 00:22:12.364 12:06:17 -- common/autotest_common.sh@955 -- # kill 132239 00:22:12.364 12:06:17 -- common/autotest_common.sh@960 -- # wait 132239 00:22:12.364 [2024-11-29 12:06:17.854700] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.364 [2024-11-29 12:06:17.854945] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:12.931 00:22:12.931 real 0m14.670s 00:22:12.931 user 0m27.003s 00:22:12.931 sys 0m1.876s 00:22:12.931 12:06:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:12.931 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:12.931 ************************************ 00:22:12.931 END TEST raid_state_function_test 00:22:12.931 ************************************ 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:12.931 12:06:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:12.931 12:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:12.931 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:12.931 ************************************ 00:22:12.931 START TEST raid_state_function_test_sb 00:22:12.931 ************************************ 00:22:12.931 12:06:18 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=132684 00:22:12.931 Process raid pid: 132684 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132684' 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:12.931 12:06:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132684 /var/tmp/spdk-raid.sock 00:22:12.931 12:06:18 -- common/autotest_common.sh@829 -- # '[' -z 132684 ']' 00:22:12.931 12:06:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:12.931 12:06:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:12.931 12:06:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:12.931 12:06:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.931 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:12.931 [2024-11-29 12:06:18.317909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:12.931 [2024-11-29 12:06:18.318208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.190 [2024-11-29 12:06:18.466715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.190 [2024-11-29 12:06:18.574208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.190 [2024-11-29 12:06:18.634272] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.757 12:06:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.757 12:06:19 -- common/autotest_common.sh@862 -- # return 0 00:22:13.757 12:06:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:14.111 [2024-11-29 12:06:19.509485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:14.111 [2024-11-29 12:06:19.509605] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:14.111 [2024-11-29 12:06:19.509621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:14.111 [2024-11-29 12:06:19.509641] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:14.111 [2024-11-29 12:06:19.509649] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:14.111 [2024-11-29 12:06:19.509704] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:14.111 [2024-11-29 12:06:19.509714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:14.111 [2024-11-29 12:06:19.509743] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:14.111 12:06:19 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.111 12:06:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.112 12:06:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.112 12:06:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.112 12:06:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.112 12:06:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.397 12:06:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.397 "name": "Existed_Raid", 00:22:14.397 "uuid": "7f4599f6-e0b6-45ba-b7cf-0750eba564c0", 00:22:14.397 "strip_size_kb": 0, 00:22:14.398 "state": "configuring", 00:22:14.398 "raid_level": "raid1", 00:22:14.398 "superblock": true, 00:22:14.398 "num_base_bdevs": 4, 00:22:14.398 "num_base_bdevs_discovered": 0, 00:22:14.398 "num_base_bdevs_operational": 4, 00:22:14.398 "base_bdevs_list": [ 00:22:14.398 { 00:22:14.398 "name": "BaseBdev1", 00:22:14.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.398 "is_configured": false, 00:22:14.398 "data_offset": 0, 00:22:14.398 "data_size": 0 00:22:14.398 }, 00:22:14.398 { 00:22:14.398 "name": "BaseBdev2", 00:22:14.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.398 "is_configured": false, 00:22:14.398 "data_offset": 0, 00:22:14.398 "data_size": 0 00:22:14.398 }, 00:22:14.398 { 00:22:14.398 "name": "BaseBdev3", 00:22:14.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.398 "is_configured": false, 00:22:14.398 "data_offset": 0, 00:22:14.398 "data_size": 0 00:22:14.398 }, 00:22:14.398 { 00:22:14.398 "name": "BaseBdev4", 00:22:14.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.398 "is_configured": false, 00:22:14.398 "data_offset": 0, 00:22:14.398 "data_size": 0 00:22:14.398 } 00:22:14.398 ] 00:22:14.398 }' 00:22:14.398 12:06:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.398 12:06:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.964 12:06:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.222 [2024-11-29 12:06:20.641496] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.222 [2024-11-29 12:06:20.641556] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:15.222 12:06:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:15.479 [2024-11-29 12:06:20.921639] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:15.479 [2024-11-29 12:06:20.921737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:15.479 [2024-11-29 12:06:20.921750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:15.480 [2024-11-29 12:06:20.921779] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.480 [2024-11-29 12:06:20.921789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:15.480 [2024-11-29 12:06:20.921810] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:15.480 [2024-11-29 12:06:20.921818] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:15.480 [2024-11-29 12:06:20.921845] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:15.480 12:06:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:15.739 [2024-11-29 12:06:21.213588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:15.739 BaseBdev1 00:22:15.739 12:06:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:15.739 12:06:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:15.739 12:06:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:15.739 12:06:21 -- common/autotest_common.sh@899 -- # local i 00:22:15.739 12:06:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:15.739 12:06:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:15.739 12:06:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.997 12:06:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:16.254 [ 00:22:16.254 { 00:22:16.254 "name": "BaseBdev1", 00:22:16.254 "aliases": [ 00:22:16.254 "3784d75f-8e4f-46b5-9dc3-f3e35725d879" 00:22:16.254 ], 00:22:16.254 "product_name": "Malloc disk", 00:22:16.254 "block_size": 512, 00:22:16.254 "num_blocks": 65536, 00:22:16.254 "uuid": "3784d75f-8e4f-46b5-9dc3-f3e35725d879", 00:22:16.254 "assigned_rate_limits": { 00:22:16.254 "rw_ios_per_sec": 0, 00:22:16.254 "rw_mbytes_per_sec": 0, 00:22:16.254 "r_mbytes_per_sec": 0, 00:22:16.254 "w_mbytes_per_sec": 0 00:22:16.254 }, 00:22:16.254 "claimed": true, 00:22:16.254 "claim_type": "exclusive_write", 00:22:16.254 "zoned": false, 00:22:16.254 "supported_io_types": { 00:22:16.254 "read": true, 00:22:16.254 "write": true, 00:22:16.254 "unmap": true, 00:22:16.254 "write_zeroes": true, 00:22:16.254 "flush": true, 00:22:16.254 "reset": true, 00:22:16.254 "compare": false, 00:22:16.254 "compare_and_write": false, 00:22:16.254 "abort": true, 00:22:16.254 "nvme_admin": false, 00:22:16.254 "nvme_io": false 00:22:16.254 }, 00:22:16.254 "memory_domains": [ 00:22:16.254 { 00:22:16.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.254 "dma_device_type": 2 00:22:16.254 } 00:22:16.254 ], 00:22:16.254 "driver_specific": {} 00:22:16.254 } 00:22:16.254 ] 00:22:16.254 12:06:21 -- common/autotest_common.sh@905 -- # return 0 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:16.254 12:06:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.255 12:06:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.255 12:06:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.255 12:06:21 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:22:16.255 12:06:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.255 12:06:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.513 12:06:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.513 "name": "Existed_Raid", 00:22:16.513 "uuid": "6ca146df-6886-4165-9822-dd1bd4067751", 00:22:16.513 "strip_size_kb": 0, 00:22:16.513 "state": "configuring", 00:22:16.513 "raid_level": "raid1", 00:22:16.513 "superblock": true, 00:22:16.513 "num_base_bdevs": 4, 00:22:16.513 "num_base_bdevs_discovered": 1, 00:22:16.513 "num_base_bdevs_operational": 4, 00:22:16.513 "base_bdevs_list": [ 00:22:16.513 { 00:22:16.513 "name": "BaseBdev1", 00:22:16.513 "uuid": "3784d75f-8e4f-46b5-9dc3-f3e35725d879", 00:22:16.513 "is_configured": true, 00:22:16.513 "data_offset": 2048, 00:22:16.513 "data_size": 63488 00:22:16.513 }, 00:22:16.513 { 00:22:16.513 "name": "BaseBdev2", 00:22:16.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.513 "is_configured": false, 00:22:16.513 "data_offset": 0, 00:22:16.513 "data_size": 0 00:22:16.513 }, 00:22:16.513 { 00:22:16.513 "name": "BaseBdev3", 00:22:16.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.513 "is_configured": false, 00:22:16.513 "data_offset": 0, 00:22:16.513 "data_size": 0 00:22:16.513 }, 00:22:16.513 { 00:22:16.513 "name": "BaseBdev4", 00:22:16.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.513 "is_configured": false, 00:22:16.513 "data_offset": 0, 00:22:16.513 "data_size": 0 00:22:16.513 } 00:22:16.513 ] 00:22:16.513 }' 00:22:16.513 12:06:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.513 12:06:21 -- common/autotest_common.sh@10 -- # set +x 00:22:17.446 12:06:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:17.446 [2024-11-29 12:06:22.886025] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.446 [2024-11-29 12:06:22.886133] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:17.446 12:06:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:17.446 12:06:22 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:18.013 12:06:23 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:18.013 BaseBdev1 00:22:18.013 12:06:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:18.013 12:06:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:18.013 12:06:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:18.013 12:06:23 -- common/autotest_common.sh@899 -- # local i 00:22:18.013 12:06:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:18.013 12:06:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:18.013 12:06:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.578 12:06:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:18.578 [ 00:22:18.578 { 00:22:18.578 "name": "BaseBdev1", 00:22:18.578 "aliases": [ 00:22:18.578 "81c06b45-23be-4adc-8581-f3f097a0c426" 00:22:18.578 ], 00:22:18.578 
"product_name": "Malloc disk", 00:22:18.578 "block_size": 512, 00:22:18.578 "num_blocks": 65536, 00:22:18.578 "uuid": "81c06b45-23be-4adc-8581-f3f097a0c426", 00:22:18.578 "assigned_rate_limits": { 00:22:18.578 "rw_ios_per_sec": 0, 00:22:18.578 "rw_mbytes_per_sec": 0, 00:22:18.578 "r_mbytes_per_sec": 0, 00:22:18.578 "w_mbytes_per_sec": 0 00:22:18.578 }, 00:22:18.578 "claimed": false, 00:22:18.578 "zoned": false, 00:22:18.578 "supported_io_types": { 00:22:18.578 "read": true, 00:22:18.578 "write": true, 00:22:18.578 "unmap": true, 00:22:18.578 "write_zeroes": true, 00:22:18.578 "flush": true, 00:22:18.578 "reset": true, 00:22:18.578 "compare": false, 00:22:18.578 "compare_and_write": false, 00:22:18.578 "abort": true, 00:22:18.578 "nvme_admin": false, 00:22:18.578 "nvme_io": false 00:22:18.578 }, 00:22:18.578 "memory_domains": [ 00:22:18.578 { 00:22:18.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.578 "dma_device_type": 2 00:22:18.578 } 00:22:18.578 ], 00:22:18.578 "driver_specific": {} 00:22:18.578 } 00:22:18.578 ] 00:22:18.578 12:06:24 -- common/autotest_common.sh@905 -- # return 0 00:22:18.578 12:06:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:18.837 [2024-11-29 12:06:24.308802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.837 [2024-11-29 12:06:24.311098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:18.837 [2024-11-29 12:06:24.311195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:18.837 [2024-11-29 12:06:24.311209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:18.837 [2024-11-29 12:06:24.311237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:18.837 [2024-11-29 12:06:24.311247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:18.837 [2024-11-29 12:06:24.311265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.837 12:06:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.095 12:06:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.095 "name": "Existed_Raid", 00:22:19.095 "uuid": 
"c9ebf5a4-68b6-4534-92cf-24a1b5e63d48", 00:22:19.095 "strip_size_kb": 0, 00:22:19.095 "state": "configuring", 00:22:19.095 "raid_level": "raid1", 00:22:19.095 "superblock": true, 00:22:19.095 "num_base_bdevs": 4, 00:22:19.095 "num_base_bdevs_discovered": 1, 00:22:19.095 "num_base_bdevs_operational": 4, 00:22:19.095 "base_bdevs_list": [ 00:22:19.095 { 00:22:19.095 "name": "BaseBdev1", 00:22:19.095 "uuid": "81c06b45-23be-4adc-8581-f3f097a0c426", 00:22:19.095 "is_configured": true, 00:22:19.095 "data_offset": 2048, 00:22:19.095 "data_size": 63488 00:22:19.095 }, 00:22:19.095 { 00:22:19.095 "name": "BaseBdev2", 00:22:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.095 "is_configured": false, 00:22:19.095 "data_offset": 0, 00:22:19.095 "data_size": 0 00:22:19.095 }, 00:22:19.095 { 00:22:19.095 "name": "BaseBdev3", 00:22:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.095 "is_configured": false, 00:22:19.095 "data_offset": 0, 00:22:19.095 "data_size": 0 00:22:19.095 }, 00:22:19.095 { 00:22:19.095 "name": "BaseBdev4", 00:22:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.095 "is_configured": false, 00:22:19.095 "data_offset": 0, 00:22:19.095 "data_size": 0 00:22:19.095 } 00:22:19.095 ] 00:22:19.095 }' 00:22:19.095 12:06:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.095 12:06:24 -- common/autotest_common.sh@10 -- # set +x 00:22:20.031 12:06:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:20.289 [2024-11-29 12:06:25.554807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.289 BaseBdev2 00:22:20.289 12:06:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:20.289 12:06:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:20.289 12:06:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:20.289 12:06:25 -- common/autotest_common.sh@899 -- # local i 00:22:20.289 12:06:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:20.289 12:06:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:20.289 12:06:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:20.548 12:06:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:20.548 [ 00:22:20.548 { 00:22:20.548 "name": "BaseBdev2", 00:22:20.548 "aliases": [ 00:22:20.548 "87a46fe5-d0f4-4da8-a66e-bb84545966f2" 00:22:20.548 ], 00:22:20.548 "product_name": "Malloc disk", 00:22:20.548 "block_size": 512, 00:22:20.548 "num_blocks": 65536, 00:22:20.548 "uuid": "87a46fe5-d0f4-4da8-a66e-bb84545966f2", 00:22:20.548 "assigned_rate_limits": { 00:22:20.548 "rw_ios_per_sec": 0, 00:22:20.548 "rw_mbytes_per_sec": 0, 00:22:20.548 "r_mbytes_per_sec": 0, 00:22:20.548 "w_mbytes_per_sec": 0 00:22:20.548 }, 00:22:20.548 "claimed": true, 00:22:20.548 "claim_type": "exclusive_write", 00:22:20.548 "zoned": false, 00:22:20.548 "supported_io_types": { 00:22:20.548 "read": true, 00:22:20.548 "write": true, 00:22:20.548 "unmap": true, 00:22:20.548 "write_zeroes": true, 00:22:20.548 "flush": true, 00:22:20.548 "reset": true, 00:22:20.548 "compare": false, 00:22:20.548 "compare_and_write": false, 00:22:20.548 "abort": true, 00:22:20.548 "nvme_admin": false, 00:22:20.548 "nvme_io": false 00:22:20.548 }, 00:22:20.548 "memory_domains": [ 00:22:20.548 { 
00:22:20.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.548 "dma_device_type": 2 00:22:20.548 } 00:22:20.548 ], 00:22:20.548 "driver_specific": {} 00:22:20.548 } 00:22:20.548 ] 00:22:20.548 12:06:26 -- common/autotest_common.sh@905 -- # return 0 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.548 12:06:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.807 12:06:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.807 "name": "Existed_Raid", 00:22:20.807 "uuid": "c9ebf5a4-68b6-4534-92cf-24a1b5e63d48", 00:22:20.807 "strip_size_kb": 0, 00:22:20.807 "state": "configuring", 00:22:20.807 "raid_level": "raid1", 00:22:20.807 "superblock": true, 00:22:20.807 "num_base_bdevs": 4, 00:22:20.807 "num_base_bdevs_discovered": 2, 00:22:20.807 "num_base_bdevs_operational": 4, 00:22:20.807 "base_bdevs_list": [ 00:22:20.807 { 00:22:20.807 "name": "BaseBdev1", 00:22:20.807 "uuid": "81c06b45-23be-4adc-8581-f3f097a0c426", 00:22:20.807 "is_configured": true, 00:22:20.807 "data_offset": 2048, 00:22:20.807 "data_size": 63488 00:22:20.807 }, 00:22:20.807 { 00:22:20.807 "name": "BaseBdev2", 00:22:20.807 "uuid": "87a46fe5-d0f4-4da8-a66e-bb84545966f2", 00:22:20.807 "is_configured": true, 00:22:20.807 "data_offset": 2048, 00:22:20.807 "data_size": 63488 00:22:20.807 }, 00:22:20.807 { 00:22:20.807 "name": "BaseBdev3", 00:22:20.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.807 "is_configured": false, 00:22:20.807 "data_offset": 0, 00:22:20.807 "data_size": 0 00:22:20.807 }, 00:22:20.807 { 00:22:20.807 "name": "BaseBdev4", 00:22:20.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.807 "is_configured": false, 00:22:20.807 "data_offset": 0, 00:22:20.807 "data_size": 0 00:22:20.807 } 00:22:20.807 ] 00:22:20.807 }' 00:22:20.807 12:06:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.807 12:06:26 -- common/autotest_common.sh@10 -- # set +x 00:22:21.740 12:06:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:21.740 [2024-11-29 12:06:27.232537] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.740 BaseBdev3 00:22:21.740 12:06:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:21.740 12:06:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:21.740 12:06:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:21.740 12:06:27 -- 
common/autotest_common.sh@899 -- # local i 00:22:21.740 12:06:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:21.740 12:06:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:21.997 12:06:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.255 12:06:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:22.255 [ 00:22:22.255 { 00:22:22.255 "name": "BaseBdev3", 00:22:22.255 "aliases": [ 00:22:22.255 "14e97c5f-8e44-4629-be09-7ab153687274" 00:22:22.255 ], 00:22:22.255 "product_name": "Malloc disk", 00:22:22.255 "block_size": 512, 00:22:22.255 "num_blocks": 65536, 00:22:22.255 "uuid": "14e97c5f-8e44-4629-be09-7ab153687274", 00:22:22.255 "assigned_rate_limits": { 00:22:22.255 "rw_ios_per_sec": 0, 00:22:22.255 "rw_mbytes_per_sec": 0, 00:22:22.255 "r_mbytes_per_sec": 0, 00:22:22.255 "w_mbytes_per_sec": 0 00:22:22.255 }, 00:22:22.255 "claimed": true, 00:22:22.255 "claim_type": "exclusive_write", 00:22:22.255 "zoned": false, 00:22:22.255 "supported_io_types": { 00:22:22.255 "read": true, 00:22:22.255 "write": true, 00:22:22.255 "unmap": true, 00:22:22.255 "write_zeroes": true, 00:22:22.255 "flush": true, 00:22:22.255 "reset": true, 00:22:22.255 "compare": false, 00:22:22.255 "compare_and_write": false, 00:22:22.255 "abort": true, 00:22:22.255 "nvme_admin": false, 00:22:22.255 "nvme_io": false 00:22:22.255 }, 00:22:22.255 "memory_domains": [ 00:22:22.255 { 00:22:22.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.255 "dma_device_type": 2 00:22:22.255 } 00:22:22.255 ], 00:22:22.255 "driver_specific": {} 00:22:22.255 } 00:22:22.255 ] 00:22:22.255 12:06:27 -- common/autotest_common.sh@905 -- # return 0 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.255 12:06:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.513 12:06:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.770 12:06:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.770 "name": "Existed_Raid", 00:22:22.770 "uuid": "c9ebf5a4-68b6-4534-92cf-24a1b5e63d48", 00:22:22.770 "strip_size_kb": 0, 00:22:22.770 "state": "configuring", 00:22:22.770 "raid_level": "raid1", 00:22:22.770 "superblock": true, 00:22:22.770 "num_base_bdevs": 4, 00:22:22.770 "num_base_bdevs_discovered": 3, 00:22:22.770 "num_base_bdevs_operational": 4, 00:22:22.770 "base_bdevs_list": [ 00:22:22.770 { 00:22:22.770 "name": "BaseBdev1", 00:22:22.770 
"uuid": "81c06b45-23be-4adc-8581-f3f097a0c426", 00:22:22.770 "is_configured": true, 00:22:22.770 "data_offset": 2048, 00:22:22.770 "data_size": 63488 00:22:22.770 }, 00:22:22.770 { 00:22:22.770 "name": "BaseBdev2", 00:22:22.770 "uuid": "87a46fe5-d0f4-4da8-a66e-bb84545966f2", 00:22:22.770 "is_configured": true, 00:22:22.770 "data_offset": 2048, 00:22:22.770 "data_size": 63488 00:22:22.770 }, 00:22:22.770 { 00:22:22.770 "name": "BaseBdev3", 00:22:22.770 "uuid": "14e97c5f-8e44-4629-be09-7ab153687274", 00:22:22.770 "is_configured": true, 00:22:22.770 "data_offset": 2048, 00:22:22.770 "data_size": 63488 00:22:22.770 }, 00:22:22.770 { 00:22:22.770 "name": "BaseBdev4", 00:22:22.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.770 "is_configured": false, 00:22:22.770 "data_offset": 0, 00:22:22.770 "data_size": 0 00:22:22.770 } 00:22:22.770 ] 00:22:22.770 }' 00:22:22.770 12:06:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.770 12:06:28 -- common/autotest_common.sh@10 -- # set +x 00:22:23.335 12:06:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:23.593 [2024-11-29 12:06:28.978696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:23.593 [2024-11-29 12:06:28.978990] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:22:23.593 [2024-11-29 12:06:28.979008] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:23.593 [2024-11-29 12:06:28.979152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:22:23.593 [2024-11-29 12:06:28.979594] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:22:23.593 [2024-11-29 12:06:28.979621] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:22:23.593 [2024-11-29 12:06:28.979791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.593 BaseBdev4 00:22:23.593 12:06:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:23.593 12:06:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:23.593 12:06:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.593 12:06:28 -- common/autotest_common.sh@899 -- # local i 00:22:23.593 12:06:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.593 12:06:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.593 12:06:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.851 12:06:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:24.109 [ 00:22:24.109 { 00:22:24.109 "name": "BaseBdev4", 00:22:24.109 "aliases": [ 00:22:24.109 "d8ca186c-50ef-4380-85c7-0f509989a7d9" 00:22:24.109 ], 00:22:24.109 "product_name": "Malloc disk", 00:22:24.109 "block_size": 512, 00:22:24.109 "num_blocks": 65536, 00:22:24.109 "uuid": "d8ca186c-50ef-4380-85c7-0f509989a7d9", 00:22:24.109 "assigned_rate_limits": { 00:22:24.109 "rw_ios_per_sec": 0, 00:22:24.109 "rw_mbytes_per_sec": 0, 00:22:24.109 "r_mbytes_per_sec": 0, 00:22:24.109 "w_mbytes_per_sec": 0 00:22:24.109 }, 00:22:24.109 "claimed": true, 00:22:24.109 "claim_type": "exclusive_write", 00:22:24.109 "zoned": false, 00:22:24.109 "supported_io_types": { 00:22:24.109 
"read": true, 00:22:24.109 "write": true, 00:22:24.109 "unmap": true, 00:22:24.109 "write_zeroes": true, 00:22:24.109 "flush": true, 00:22:24.109 "reset": true, 00:22:24.109 "compare": false, 00:22:24.109 "compare_and_write": false, 00:22:24.109 "abort": true, 00:22:24.109 "nvme_admin": false, 00:22:24.109 "nvme_io": false 00:22:24.109 }, 00:22:24.109 "memory_domains": [ 00:22:24.109 { 00:22:24.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.109 "dma_device_type": 2 00:22:24.109 } 00:22:24.109 ], 00:22:24.109 "driver_specific": {} 00:22:24.109 } 00:22:24.109 ] 00:22:24.109 12:06:29 -- common/autotest_common.sh@905 -- # return 0 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.109 12:06:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.366 12:06:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.366 "name": "Existed_Raid", 00:22:24.366 "uuid": "c9ebf5a4-68b6-4534-92cf-24a1b5e63d48", 00:22:24.366 "strip_size_kb": 0, 00:22:24.366 "state": "online", 00:22:24.366 "raid_level": "raid1", 00:22:24.366 "superblock": true, 00:22:24.366 "num_base_bdevs": 4, 00:22:24.366 "num_base_bdevs_discovered": 4, 00:22:24.366 "num_base_bdevs_operational": 4, 00:22:24.366 "base_bdevs_list": [ 00:22:24.366 { 00:22:24.366 "name": "BaseBdev1", 00:22:24.366 "uuid": "81c06b45-23be-4adc-8581-f3f097a0c426", 00:22:24.366 "is_configured": true, 00:22:24.366 "data_offset": 2048, 00:22:24.366 "data_size": 63488 00:22:24.366 }, 00:22:24.366 { 00:22:24.366 "name": "BaseBdev2", 00:22:24.366 "uuid": "87a46fe5-d0f4-4da8-a66e-bb84545966f2", 00:22:24.366 "is_configured": true, 00:22:24.366 "data_offset": 2048, 00:22:24.366 "data_size": 63488 00:22:24.366 }, 00:22:24.366 { 00:22:24.366 "name": "BaseBdev3", 00:22:24.366 "uuid": "14e97c5f-8e44-4629-be09-7ab153687274", 00:22:24.366 "is_configured": true, 00:22:24.366 "data_offset": 2048, 00:22:24.366 "data_size": 63488 00:22:24.366 }, 00:22:24.366 { 00:22:24.366 "name": "BaseBdev4", 00:22:24.366 "uuid": "d8ca186c-50ef-4380-85c7-0f509989a7d9", 00:22:24.366 "is_configured": true, 00:22:24.366 "data_offset": 2048, 00:22:24.366 "data_size": 63488 00:22:24.366 } 00:22:24.366 ] 00:22:24.366 }' 00:22:24.366 12:06:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.366 12:06:29 -- common/autotest_common.sh@10 -- # set +x 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:25.298 [2024-11-29 12:06:30.666993] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.298 12:06:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.556 12:06:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.556 "name": "Existed_Raid", 00:22:25.556 "uuid": "c9ebf5a4-68b6-4534-92cf-24a1b5e63d48", 00:22:25.556 "strip_size_kb": 0, 00:22:25.556 "state": "online", 00:22:25.556 "raid_level": "raid1", 00:22:25.556 "superblock": true, 00:22:25.556 "num_base_bdevs": 4, 00:22:25.556 "num_base_bdevs_discovered": 3, 00:22:25.556 "num_base_bdevs_operational": 3, 00:22:25.556 "base_bdevs_list": [ 00:22:25.556 { 00:22:25.556 "name": null, 00:22:25.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.556 "is_configured": false, 00:22:25.556 "data_offset": 2048, 00:22:25.556 "data_size": 63488 00:22:25.556 }, 00:22:25.556 { 00:22:25.556 "name": "BaseBdev2", 00:22:25.556 "uuid": "87a46fe5-d0f4-4da8-a66e-bb84545966f2", 00:22:25.556 "is_configured": true, 00:22:25.556 "data_offset": 2048, 00:22:25.556 "data_size": 63488 00:22:25.556 }, 00:22:25.556 { 00:22:25.556 "name": "BaseBdev3", 00:22:25.556 "uuid": "14e97c5f-8e44-4629-be09-7ab153687274", 00:22:25.556 "is_configured": true, 00:22:25.556 "data_offset": 2048, 00:22:25.556 "data_size": 63488 00:22:25.556 }, 00:22:25.556 { 00:22:25.556 "name": "BaseBdev4", 00:22:25.556 "uuid": "d8ca186c-50ef-4380-85c7-0f509989a7d9", 00:22:25.556 "is_configured": true, 00:22:25.556 "data_offset": 2048, 00:22:25.556 "data_size": 63488 00:22:25.556 } 00:22:25.556 ] 00:22:25.556 }' 00:22:25.556 12:06:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.556 12:06:30 -- common/autotest_common.sh@10 -- # set +x 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:26.491 12:06:31 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:26.750 [2024-11-29 12:06:32.224771] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:26.750 12:06:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:26.750 12:06:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:27.008 12:06:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.008 12:06:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:27.266 12:06:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:27.266 12:06:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:27.266 12:06:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:27.525 [2024-11-29 12:06:32.799063] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:27.525 12:06:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:27.525 12:06:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:27.525 12:06:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.525 12:06:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:27.783 12:06:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:27.783 12:06:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:27.783 12:06:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:28.056 [2024-11-29 12:06:33.314912] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:28.056 [2024-11-29 12:06:33.314969] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:28.056 [2024-11-29 12:06:33.315052] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.056 [2024-11-29 12:06:33.329369] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.056 [2024-11-29 12:06:33.329426] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:22:28.056 12:06:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:28.056 12:06:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:28.056 12:06:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.056 12:06:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:28.329 12:06:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:28.329 12:06:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:28.329 12:06:33 -- bdev/bdev_raid.sh@287 -- # killprocess 132684 00:22:28.329 12:06:33 -- common/autotest_common.sh@936 -- # '[' -z 132684 ']' 00:22:28.329 12:06:33 -- common/autotest_common.sh@940 -- # kill -0 132684 00:22:28.329 12:06:33 -- common/autotest_common.sh@941 -- # uname 00:22:28.329 12:06:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.329 12:06:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132684 00:22:28.329 12:06:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:28.329 12:06:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:28.329 12:06:33 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 132684' 00:22:28.329 killing process with pid 132684 00:22:28.329 12:06:33 -- common/autotest_common.sh@955 -- # kill 132684 00:22:28.329 12:06:33 -- common/autotest_common.sh@960 -- # wait 132684 00:22:28.329 [2024-11-29 12:06:33.610584] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:28.329 [2024-11-29 12:06:33.610678] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:28.588 00:22:28.588 real 0m15.602s 00:22:28.588 user 0m28.806s 00:22:28.588 sys 0m2.048s 00:22:28.588 12:06:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:28.588 12:06:33 -- common/autotest_common.sh@10 -- # set +x 00:22:28.588 ************************************ 00:22:28.588 END TEST raid_state_function_test_sb 00:22:28.588 ************************************ 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:28.588 12:06:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:28.588 12:06:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:28.588 12:06:33 -- common/autotest_common.sh@10 -- # set +x 00:22:28.588 ************************************ 00:22:28.588 START TEST raid_superblock_test 00:22:28.588 ************************************ 00:22:28.588 12:06:33 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=133137 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 133137 /var/tmp/spdk-raid.sock 00:22:28.588 12:06:33 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:28.588 12:06:33 -- common/autotest_common.sh@829 -- # '[' -z 133137 ']' 00:22:28.588 12:06:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:28.588 12:06:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:28.588 12:06:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
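The trace above tears down the raid_state_function_test_sb app and then launches a fresh bdev_svc RPC target for raid_superblock_test, waiting until its UNIX socket answers before any bdev_raid RPCs are issued. A condensed sketch of that lifecycle, reusing the bdev_svc invocation and socket path shown in the trace; the polling loop is a simplification for illustration, not the harness's actual waitforlisten implementation:

# Start a bare bdev application with raid debug logging on a private RPC socket.
sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
raid_pid=$!

# Simplified wait-for-listen: poll until the RPC server on the socket responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

# ... issue the raid1 superblock RPCs against "$sock" ...

# Stop the target once the test body has finished.
kill "$raid_pid" && wait "$raid_pid"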
00:22:28.588 12:06:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.588 12:06:33 -- common/autotest_common.sh@10 -- # set +x 00:22:28.588 [2024-11-29 12:06:33.975839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:28.588 [2024-11-29 12:06:33.976712] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133137 ] 00:22:28.846 [2024-11-29 12:06:34.121923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.846 [2024-11-29 12:06:34.217003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.846 [2024-11-29 12:06:34.272038] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.783 12:06:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.783 12:06:34 -- common/autotest_common.sh@862 -- # return 0 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.783 12:06:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:29.783 malloc1 00:22:29.783 12:06:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:30.044 [2024-11-29 12:06:35.465635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:30.044 [2024-11-29 12:06:35.465784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.044 [2024-11-29 12:06:35.465827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:30.044 [2024-11-29 12:06:35.465890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.044 [2024-11-29 12:06:35.468687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.044 [2024-11-29 12:06:35.468765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:30.044 pt1 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:30.044 12:06:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:30.302 malloc2 00:22:30.302 12:06:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:30.561 [2024-11-29 12:06:36.020884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:30.561 [2024-11-29 12:06:36.021002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.561 [2024-11-29 12:06:36.021050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:30.561 [2024-11-29 12:06:36.021104] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.561 [2024-11-29 12:06:36.023708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.561 [2024-11-29 12:06:36.023771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:30.561 pt2 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:30.561 12:06:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:30.819 malloc3 00:22:30.819 12:06:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:31.078 [2024-11-29 12:06:36.541751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:31.078 [2024-11-29 12:06:36.541870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.078 [2024-11-29 12:06:36.541918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:31.078 [2024-11-29 12:06:36.541967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.078 [2024-11-29 12:06:36.544577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.078 [2024-11-29 12:06:36.544648] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:31.078 pt3 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:31.078 12:06:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:31.337 malloc4 00:22:31.337 12:06:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:31.596 [2024-11-29 12:06:37.029070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:31.596 [2024-11-29 12:06:37.029198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.596 [2024-11-29 12:06:37.029241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:31.596 [2024-11-29 12:06:37.029299] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.596 [2024-11-29 12:06:37.031890] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.596 [2024-11-29 12:06:37.031955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:31.596 pt4 00:22:31.596 12:06:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:31.596 12:06:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:31.596 12:06:37 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:31.855 [2024-11-29 12:06:37.253226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:31.856 [2024-11-29 12:06:37.255538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:31.856 [2024-11-29 12:06:37.255636] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:31.856 [2024-11-29 12:06:37.255695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:31.856 [2024-11-29 12:06:37.255964] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:22:31.856 [2024-11-29 12:06:37.255991] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:31.856 [2024-11-29 12:06:37.256159] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:31.856 [2024-11-29 12:06:37.256628] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:22:31.856 [2024-11-29 12:06:37.256652] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:22:31.856 [2024-11-29 12:06:37.256831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.856 12:06:37 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.114 12:06:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.114 "name": "raid_bdev1", 00:22:32.114 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:32.114 "strip_size_kb": 0, 00:22:32.114 "state": "online", 00:22:32.114 "raid_level": "raid1", 00:22:32.114 "superblock": true, 00:22:32.114 "num_base_bdevs": 4, 00:22:32.114 "num_base_bdevs_discovered": 4, 00:22:32.114 "num_base_bdevs_operational": 4, 00:22:32.114 "base_bdevs_list": [ 00:22:32.114 { 00:22:32.114 "name": "pt1", 00:22:32.114 "uuid": "9ce9e92a-e32e-5b69-8ffe-a56ef6dc94d7", 00:22:32.114 "is_configured": true, 00:22:32.114 "data_offset": 2048, 00:22:32.114 "data_size": 63488 00:22:32.114 }, 00:22:32.114 { 00:22:32.114 "name": "pt2", 00:22:32.114 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:32.114 "is_configured": true, 00:22:32.114 "data_offset": 2048, 00:22:32.114 "data_size": 63488 00:22:32.114 }, 00:22:32.114 { 00:22:32.114 "name": "pt3", 00:22:32.115 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:32.115 "is_configured": true, 00:22:32.115 "data_offset": 2048, 00:22:32.115 "data_size": 63488 00:22:32.115 }, 00:22:32.115 { 00:22:32.115 "name": "pt4", 00:22:32.115 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:32.115 "is_configured": true, 00:22:32.115 "data_offset": 2048, 00:22:32.115 "data_size": 63488 00:22:32.115 } 00:22:32.115 ] 00:22:32.115 }' 00:22:32.115 12:06:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.115 12:06:37 -- common/autotest_common.sh@10 -- # set +x 00:22:32.681 12:06:38 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:32.681 12:06:38 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:32.939 [2024-11-29 12:06:38.417691] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.939 12:06:38 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3f931b0d-6fa1-4985-9a51-03c63b3985d4 00:22:32.939 12:06:38 -- bdev/bdev_raid.sh@380 -- # '[' -z 3f931b0d-6fa1-4985-9a51-03c63b3985d4 ']' 00:22:32.939 12:06:38 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:33.197 [2024-11-29 12:06:38.657427] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.197 [2024-11-29 12:06:38.657474] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.198 [2024-11-29 12:06:38.657612] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.198 [2024-11-29 12:06:38.657719] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.198 [2024-11-29 12:06:38.657732] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:22:33.198 12:06:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.198 12:06:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:33.456 12:06:38 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:33.456 12:06:38 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:33.456 12:06:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:33.456 12:06:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
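For reference, the stack that produced raid_bdev1 above can be replayed by hand with the same RPCs the script traces; the commands, sizes, UUIDs, and flags below are lifted from the trace (the -s flag is what asks bdev_raid to write an on-disk superblock, which the rest of this test relies on):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Four 32 MB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev
# with a fixed UUID so the superblock records stable base-bdev identities.
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a raid1 bdev over the passthru bdevs; -s enables the superblock.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Inspect the result: state "online", four base bdevs discovered and operational.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'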
00:22:33.715 12:06:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:33.715 12:06:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:33.973 12:06:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:33.973 12:06:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:34.232 12:06:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:34.232 12:06:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:34.490 12:06:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:34.490 12:06:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:34.748 12:06:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:34.748 12:06:40 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:34.748 12:06:40 -- common/autotest_common.sh@650 -- # local es=0 00:22:34.748 12:06:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:34.748 12:06:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.748 12:06:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.748 12:06:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.748 12:06:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.748 12:06:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.748 12:06:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.748 12:06:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.748 12:06:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:34.748 12:06:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:35.007 [2024-11-29 12:06:40.345702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:35.007 [2024-11-29 12:06:40.348309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:35.007 [2024-11-29 12:06:40.348562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:35.007 [2024-11-29 12:06:40.348804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:35.007 [2024-11-29 12:06:40.348913] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:35.007 [2024-11-29 12:06:40.349228] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:35.007 [2024-11-29 12:06:40.349397] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:35.007 [2024-11-29 12:06:40.349499] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:22:35.007 [2024-11-29 12:06:40.349587] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.007 [2024-11-29 12:06:40.349693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:22:35.007 request: 00:22:35.007 { 00:22:35.007 "name": "raid_bdev1", 00:22:35.007 "raid_level": "raid1", 00:22:35.007 "base_bdevs": [ 00:22:35.007 "malloc1", 00:22:35.007 "malloc2", 00:22:35.007 "malloc3", 00:22:35.007 "malloc4" 00:22:35.007 ], 00:22:35.007 "superblock": false, 00:22:35.007 "method": "bdev_raid_create", 00:22:35.007 "req_id": 1 00:22:35.007 } 00:22:35.007 Got JSON-RPC error response 00:22:35.007 response: 00:22:35.007 { 00:22:35.007 "code": -17, 00:22:35.007 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:35.007 } 00:22:35.007 12:06:40 -- common/autotest_common.sh@653 -- # es=1 00:22:35.007 12:06:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:35.007 12:06:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:35.007 12:06:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:35.007 12:06:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.007 12:06:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:35.266 12:06:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:35.266 12:06:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:35.266 12:06:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:35.525 [2024-11-29 12:06:40.814168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:35.525 [2024-11-29 12:06:40.814637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.525 [2024-11-29 12:06:40.814811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:35.525 [2024-11-29 12:06:40.814946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.525 [2024-11-29 12:06:40.817568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.525 [2024-11-29 12:06:40.817777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:35.525 [2024-11-29 12:06:40.817998] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:35.525 [2024-11-29 12:06:40.818188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:35.525 pt1 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.525 12:06:40 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.525 12:06:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.802 12:06:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.802 "name": "raid_bdev1", 00:22:35.802 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:35.802 "strip_size_kb": 0, 00:22:35.802 "state": "configuring", 00:22:35.802 "raid_level": "raid1", 00:22:35.802 "superblock": true, 00:22:35.802 "num_base_bdevs": 4, 00:22:35.802 "num_base_bdevs_discovered": 1, 00:22:35.802 "num_base_bdevs_operational": 4, 00:22:35.802 "base_bdevs_list": [ 00:22:35.802 { 00:22:35.802 "name": "pt1", 00:22:35.802 "uuid": "9ce9e92a-e32e-5b69-8ffe-a56ef6dc94d7", 00:22:35.802 "is_configured": true, 00:22:35.802 "data_offset": 2048, 00:22:35.802 "data_size": 63488 00:22:35.802 }, 00:22:35.802 { 00:22:35.802 "name": null, 00:22:35.802 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:35.802 "is_configured": false, 00:22:35.802 "data_offset": 2048, 00:22:35.802 "data_size": 63488 00:22:35.802 }, 00:22:35.802 { 00:22:35.802 "name": null, 00:22:35.802 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:35.802 "is_configured": false, 00:22:35.802 "data_offset": 2048, 00:22:35.802 "data_size": 63488 00:22:35.802 }, 00:22:35.802 { 00:22:35.802 "name": null, 00:22:35.802 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:35.802 "is_configured": false, 00:22:35.802 "data_offset": 2048, 00:22:35.802 "data_size": 63488 00:22:35.802 } 00:22:35.802 ] 00:22:35.802 }' 00:22:35.802 12:06:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.802 12:06:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.368 12:06:41 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:22:36.368 12:06:41 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:36.626 [2024-11-29 12:06:41.958837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:36.626 [2024-11-29 12:06:41.959268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.626 [2024-11-29 12:06:41.959364] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:36.626 [2024-11-29 12:06:41.959497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.626 [2024-11-29 12:06:41.960016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.626 [2024-11-29 12:06:41.960190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:36.626 [2024-11-29 12:06:41.960400] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:36.626 [2024-11-29 12:06:41.960570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:36.626 pt2 00:22:36.626 12:06:41 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:36.884 [2024-11-29 12:06:42.206906] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.884 12:06:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.143 12:06:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.143 "name": "raid_bdev1", 00:22:37.143 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:37.143 "strip_size_kb": 0, 00:22:37.143 "state": "configuring", 00:22:37.143 "raid_level": "raid1", 00:22:37.143 "superblock": true, 00:22:37.143 "num_base_bdevs": 4, 00:22:37.143 "num_base_bdevs_discovered": 1, 00:22:37.143 "num_base_bdevs_operational": 4, 00:22:37.143 "base_bdevs_list": [ 00:22:37.143 { 00:22:37.143 "name": "pt1", 00:22:37.143 "uuid": "9ce9e92a-e32e-5b69-8ffe-a56ef6dc94d7", 00:22:37.143 "is_configured": true, 00:22:37.143 "data_offset": 2048, 00:22:37.143 "data_size": 63488 00:22:37.143 }, 00:22:37.143 { 00:22:37.143 "name": null, 00:22:37.143 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:37.143 "is_configured": false, 00:22:37.143 "data_offset": 2048, 00:22:37.143 "data_size": 63488 00:22:37.143 }, 00:22:37.143 { 00:22:37.143 "name": null, 00:22:37.143 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:37.143 "is_configured": false, 00:22:37.143 "data_offset": 2048, 00:22:37.143 "data_size": 63488 00:22:37.143 }, 00:22:37.143 { 00:22:37.143 "name": null, 00:22:37.143 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:37.143 "is_configured": false, 00:22:37.143 "data_offset": 2048, 00:22:37.143 "data_size": 63488 00:22:37.143 } 00:22:37.143 ] 00:22:37.143 }' 00:22:37.143 12:06:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.143 12:06:42 -- common/autotest_common.sh@10 -- # set +x 00:22:37.710 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:37.710 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:37.710 12:06:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.968 [2024-11-29 12:06:43.327088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:37.968 [2024-11-29 12:06:43.327504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.968 [2024-11-29 12:06:43.327592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:37.968 [2024-11-29 12:06:43.327733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.968 [2024-11-29 12:06:43.328284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.968 [2024-11-29 12:06:43.328462] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:37.968 [2024-11-29 12:06:43.328663] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:37.968 [2024-11-29 
12:06:43.328812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.968 pt2 00:22:37.968 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:37.968 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:37.968 12:06:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:38.227 [2024-11-29 12:06:43.607216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:38.227 [2024-11-29 12:06:43.607636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.227 [2024-11-29 12:06:43.607718] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:38.227 [2024-11-29 12:06:43.607860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.227 [2024-11-29 12:06:43.608390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.227 [2024-11-29 12:06:43.608574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:38.227 [2024-11-29 12:06:43.608805] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:38.227 [2024-11-29 12:06:43.608951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:38.227 pt3 00:22:38.227 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:38.227 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:38.227 12:06:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:38.485 [2024-11-29 12:06:43.835234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:38.485 [2024-11-29 12:06:43.835629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.485 [2024-11-29 12:06:43.835711] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:38.485 [2024-11-29 12:06:43.835851] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.485 [2024-11-29 12:06:43.836390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.485 [2024-11-29 12:06:43.836574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:38.485 [2024-11-29 12:06:43.836770] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:38.485 [2024-11-29 12:06:43.836899] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:38.485 [2024-11-29 12:06:43.837115] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:38.485 [2024-11-29 12:06:43.837226] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:38.485 [2024-11-29 12:06:43.837360] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:38.485 [2024-11-29 12:06:43.837885] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:38.485 [2024-11-29 12:06:43.838002] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:38.485 [2024-11-29 12:06:43.838228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.485 pt4 
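The verify_raid_bdev_state calls that follow pull raid_bdev1's JSON out of bdev_raid_get_bdevs and compare individual fields against the expected values. A minimal stand-alone equivalent of that check, reusing the jq selector from the trace and the field names visible in the JSON dumps above (the harness's helper takes more parameters, e.g. a strip size, than this sketch checks):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

state=$(jq -r '.state' <<< "$info")
level=$(jq -r '.raid_level' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

[ "$state" = online ] && [ "$level" = raid1 ] && [ "$discovered" -eq 4 ] \
    || { echo "unexpected raid_bdev1 state: $info" >&2; exit 1; }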
00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.485 12:06:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.743 12:06:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.743 "name": "raid_bdev1", 00:22:38.743 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:38.743 "strip_size_kb": 0, 00:22:38.743 "state": "online", 00:22:38.743 "raid_level": "raid1", 00:22:38.743 "superblock": true, 00:22:38.743 "num_base_bdevs": 4, 00:22:38.743 "num_base_bdevs_discovered": 4, 00:22:38.743 "num_base_bdevs_operational": 4, 00:22:38.743 "base_bdevs_list": [ 00:22:38.743 { 00:22:38.743 "name": "pt1", 00:22:38.743 "uuid": "9ce9e92a-e32e-5b69-8ffe-a56ef6dc94d7", 00:22:38.743 "is_configured": true, 00:22:38.743 "data_offset": 2048, 00:22:38.743 "data_size": 63488 00:22:38.743 }, 00:22:38.743 { 00:22:38.743 "name": "pt2", 00:22:38.743 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:38.743 "is_configured": true, 00:22:38.743 "data_offset": 2048, 00:22:38.743 "data_size": 63488 00:22:38.743 }, 00:22:38.743 { 00:22:38.743 "name": "pt3", 00:22:38.743 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:38.743 "is_configured": true, 00:22:38.743 "data_offset": 2048, 00:22:38.743 "data_size": 63488 00:22:38.743 }, 00:22:38.743 { 00:22:38.743 "name": "pt4", 00:22:38.743 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:38.743 "is_configured": true, 00:22:38.743 "data_offset": 2048, 00:22:38.743 "data_size": 63488 00:22:38.743 } 00:22:38.743 ] 00:22:38.743 }' 00:22:38.743 12:06:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.743 12:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:39.311 12:06:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:39.311 12:06:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:39.569 [2024-11-29 12:06:44.971744] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.569 12:06:44 -- bdev/bdev_raid.sh@430 -- # '[' 3f931b0d-6fa1-4985-9a51-03c63b3985d4 '!=' 3f931b0d-6fa1-4985-9a51-03c63b3985d4 ']' 00:22:39.569 12:06:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:22:39.569 12:06:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:39.569 12:06:44 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:39.569 12:06:44 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:39.827 [2024-11-29 12:06:45.239604] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.827 12:06:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.083 12:06:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.083 "name": "raid_bdev1", 00:22:40.083 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:40.083 "strip_size_kb": 0, 00:22:40.083 "state": "online", 00:22:40.083 "raid_level": "raid1", 00:22:40.083 "superblock": true, 00:22:40.083 "num_base_bdevs": 4, 00:22:40.083 "num_base_bdevs_discovered": 3, 00:22:40.083 "num_base_bdevs_operational": 3, 00:22:40.083 "base_bdevs_list": [ 00:22:40.083 { 00:22:40.083 "name": null, 00:22:40.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.083 "is_configured": false, 00:22:40.083 "data_offset": 2048, 00:22:40.083 "data_size": 63488 00:22:40.083 }, 00:22:40.083 { 00:22:40.083 "name": "pt2", 00:22:40.083 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:40.083 "is_configured": true, 00:22:40.083 "data_offset": 2048, 00:22:40.083 "data_size": 63488 00:22:40.083 }, 00:22:40.083 { 00:22:40.083 "name": "pt3", 00:22:40.084 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:40.084 "is_configured": true, 00:22:40.084 "data_offset": 2048, 00:22:40.084 "data_size": 63488 00:22:40.084 }, 00:22:40.084 { 00:22:40.084 "name": "pt4", 00:22:40.084 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:40.084 "is_configured": true, 00:22:40.084 "data_offset": 2048, 00:22:40.084 "data_size": 63488 00:22:40.084 } 00:22:40.084 ] 00:22:40.084 }' 00:22:40.084 12:06:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.084 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:22:40.649 12:06:46 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:40.907 [2024-11-29 12:06:46.383795] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:40.907 [2024-11-29 12:06:46.383898] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.907 [2024-11-29 12:06:46.384017] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.907 [2024-11-29 12:06:46.384139] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.907 [2024-11-29 12:06:46.384381] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:40.907 12:06:46 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:40.907 12:06:46 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:41.472 12:06:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:41.731 12:06:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:41.731 12:06:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:41.731 12:06:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:41.989 12:06:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:41.989 12:06:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:41.989 12:06:47 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:41.989 12:06:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:41.989 12:06:47 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:42.246 [2024-11-29 12:06:47.586884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:42.246 [2024-11-29 12:06:47.587299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.247 [2024-11-29 12:06:47.587386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:42.247 [2024-11-29 12:06:47.587639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.247 [2024-11-29 12:06:47.590367] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.247 [2024-11-29 12:06:47.590576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:42.247 [2024-11-29 12:06:47.590823] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:42.247 [2024-11-29 12:06:47.590974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:42.247 pt2 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.247 12:06:47 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.504 12:06:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.504 "name": "raid_bdev1", 00:22:42.504 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:42.504 "strip_size_kb": 0, 00:22:42.504 "state": "configuring", 00:22:42.504 "raid_level": "raid1", 00:22:42.504 "superblock": true, 00:22:42.504 "num_base_bdevs": 4, 00:22:42.504 "num_base_bdevs_discovered": 1, 00:22:42.504 "num_base_bdevs_operational": 3, 00:22:42.504 "base_bdevs_list": [ 00:22:42.504 { 00:22:42.504 "name": null, 00:22:42.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.504 "is_configured": false, 00:22:42.504 "data_offset": 2048, 00:22:42.504 "data_size": 63488 00:22:42.504 }, 00:22:42.504 { 00:22:42.504 "name": "pt2", 00:22:42.504 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:42.504 "is_configured": true, 00:22:42.504 "data_offset": 2048, 00:22:42.504 "data_size": 63488 00:22:42.504 }, 00:22:42.504 { 00:22:42.504 "name": null, 00:22:42.504 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:42.504 "is_configured": false, 00:22:42.504 "data_offset": 2048, 00:22:42.504 "data_size": 63488 00:22:42.504 }, 00:22:42.504 { 00:22:42.504 "name": null, 00:22:42.505 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:42.505 "is_configured": false, 00:22:42.505 "data_offset": 2048, 00:22:42.505 "data_size": 63488 00:22:42.505 } 00:22:42.505 ] 00:22:42.505 }' 00:22:42.505 12:06:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.505 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.072 12:06:48 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:43.072 12:06:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:43.072 12:06:48 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:43.330 [2024-11-29 12:06:48.710994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:43.330 [2024-11-29 12:06:48.711381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.330 [2024-11-29 12:06:48.711483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:43.330 [2024-11-29 12:06:48.711779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.330 [2024-11-29 12:06:48.712392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.330 [2024-11-29 12:06:48.712591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:43.330 [2024-11-29 12:06:48.712825] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:43.330 [2024-11-29 12:06:48.712964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:43.330 pt3 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.330 12:06:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.589 12:06:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.589 "name": "raid_bdev1", 00:22:43.589 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:43.589 "strip_size_kb": 0, 00:22:43.589 "state": "configuring", 00:22:43.589 "raid_level": "raid1", 00:22:43.589 "superblock": true, 00:22:43.589 "num_base_bdevs": 4, 00:22:43.589 "num_base_bdevs_discovered": 2, 00:22:43.589 "num_base_bdevs_operational": 3, 00:22:43.589 "base_bdevs_list": [ 00:22:43.589 { 00:22:43.589 "name": null, 00:22:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.589 "is_configured": false, 00:22:43.589 "data_offset": 2048, 00:22:43.589 "data_size": 63488 00:22:43.589 }, 00:22:43.589 { 00:22:43.589 "name": "pt2", 00:22:43.589 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:43.589 "is_configured": true, 00:22:43.589 "data_offset": 2048, 00:22:43.589 "data_size": 63488 00:22:43.589 }, 00:22:43.589 { 00:22:43.589 "name": "pt3", 00:22:43.589 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:43.589 "is_configured": true, 00:22:43.589 "data_offset": 2048, 00:22:43.589 "data_size": 63488 00:22:43.589 }, 00:22:43.589 { 00:22:43.589 "name": null, 00:22:43.589 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:43.589 "is_configured": false, 00:22:43.589 "data_offset": 2048, 00:22:43.589 "data_size": 63488 00:22:43.589 } 00:22:43.589 ] 00:22:43.589 }' 00:22:43.589 12:06:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.589 12:06:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.154 12:06:49 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:44.154 12:06:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:44.154 12:06:49 -- bdev/bdev_raid.sh@462 -- # i=3 00:22:44.154 12:06:49 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:44.412 [2024-11-29 12:06:49.815244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:44.412 [2024-11-29 12:06:49.815576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.412 [2024-11-29 12:06:49.815767] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:44.412 [2024-11-29 12:06:49.815927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.412 [2024-11-29 12:06:49.816469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.412 [2024-11-29 12:06:49.816641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:44.412 [2024-11-29 12:06:49.816853] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:44.412 [2024-11-29 12:06:49.816995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:44.412 [2024-11-29 12:06:49.817183] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:22:44.412 [2024-11-29 12:06:49.817297] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
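In the re-assembly traced above, each bdev_passthru_create re-exposes a malloc bdev that still carries the raid superblock, the examine path claims it, and raid_bdev1 stays "configuring" until enough base bdevs are present to go online. A small sketch of how that progression can be watched from the shell; it only reuses RPCs and JSON fields already shown in the dumps above:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Re-register passthru bdevs one at a time; after each one, print the raid's
# state and how many base bdevs the superblock examine has discovered so far.
for i in 2 3 4; do
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
    $rpc bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "raid_bdev1") |
         "\(.state) discovered=\(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
done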
00:22:44.412 [2024-11-29 12:06:49.817492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:22:44.412 [2024-11-29 12:06:49.817979] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:22:44.412 [2024-11-29 12:06:49.818120] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:22:44.412 [2024-11-29 12:06:49.818376] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.412 pt4 00:22:44.412 12:06:49 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:44.412 12:06:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.412 12:06:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.412 12:06:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.412 12:06:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.413 12:06:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.670 12:06:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.670 "name": "raid_bdev1", 00:22:44.670 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:44.670 "strip_size_kb": 0, 00:22:44.670 "state": "online", 00:22:44.670 "raid_level": "raid1", 00:22:44.670 "superblock": true, 00:22:44.670 "num_base_bdevs": 4, 00:22:44.670 "num_base_bdevs_discovered": 3, 00:22:44.670 "num_base_bdevs_operational": 3, 00:22:44.670 "base_bdevs_list": [ 00:22:44.670 { 00:22:44.670 "name": null, 00:22:44.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.670 "is_configured": false, 00:22:44.670 "data_offset": 2048, 00:22:44.670 "data_size": 63488 00:22:44.670 }, 00:22:44.670 { 00:22:44.670 "name": "pt2", 00:22:44.670 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:44.670 "is_configured": true, 00:22:44.670 "data_offset": 2048, 00:22:44.670 "data_size": 63488 00:22:44.670 }, 00:22:44.670 { 00:22:44.670 "name": "pt3", 00:22:44.670 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:44.670 "is_configured": true, 00:22:44.670 "data_offset": 2048, 00:22:44.670 "data_size": 63488 00:22:44.670 }, 00:22:44.670 { 00:22:44.670 "name": "pt4", 00:22:44.670 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:44.671 "is_configured": true, 00:22:44.671 "data_offset": 2048, 00:22:44.671 "data_size": 63488 00:22:44.671 } 00:22:44.671 ] 00:22:44.671 }' 00:22:44.671 12:06:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.671 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:22:45.237 12:06:50 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:22:45.237 12:06:50 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:45.495 [2024-11-29 12:06:50.939484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.495 [2024-11-29 12:06:50.939815] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:22:45.495 [2024-11-29 12:06:50.940005] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.495 [2024-11-29 12:06:50.940210] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.495 [2024-11-29 12:06:50.940334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:22:45.495 12:06:50 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.495 12:06:50 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:45.753 12:06:51 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:45.753 12:06:51 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:45.753 12:06:51 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:46.011 [2024-11-29 12:06:51.434793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:46.011 [2024-11-29 12:06:51.435622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.011 [2024-11-29 12:06:51.435925] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:46.011 [2024-11-29 12:06:51.436201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.011 [2024-11-29 12:06:51.439082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.011 [2024-11-29 12:06:51.439400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:46.011 [2024-11-29 12:06:51.439747] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:46.011 [2024-11-29 12:06:51.439922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:46.011 pt1 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.011 12:06:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.270 12:06:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.271 "name": "raid_bdev1", 00:22:46.271 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:46.271 "strip_size_kb": 0, 00:22:46.271 "state": "configuring", 00:22:46.271 "raid_level": "raid1", 00:22:46.271 "superblock": true, 00:22:46.271 "num_base_bdevs": 4, 00:22:46.271 "num_base_bdevs_discovered": 1, 00:22:46.271 "num_base_bdevs_operational": 4, 00:22:46.271 "base_bdevs_list": [ 00:22:46.271 { 00:22:46.271 "name": "pt1", 00:22:46.271 "uuid": 
"9ce9e92a-e32e-5b69-8ffe-a56ef6dc94d7", 00:22:46.271 "is_configured": true, 00:22:46.271 "data_offset": 2048, 00:22:46.271 "data_size": 63488 00:22:46.271 }, 00:22:46.271 { 00:22:46.271 "name": null, 00:22:46.271 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:46.271 "is_configured": false, 00:22:46.271 "data_offset": 2048, 00:22:46.271 "data_size": 63488 00:22:46.271 }, 00:22:46.271 { 00:22:46.271 "name": null, 00:22:46.271 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:46.271 "is_configured": false, 00:22:46.271 "data_offset": 2048, 00:22:46.271 "data_size": 63488 00:22:46.271 }, 00:22:46.271 { 00:22:46.271 "name": null, 00:22:46.271 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:46.271 "is_configured": false, 00:22:46.271 "data_offset": 2048, 00:22:46.271 "data_size": 63488 00:22:46.271 } 00:22:46.271 ] 00:22:46.271 }' 00:22:46.271 12:06:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.271 12:06:51 -- common/autotest_common.sh@10 -- # set +x 00:22:47.206 12:06:52 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:47.206 12:06:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:47.206 12:06:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:47.206 12:06:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:47.206 12:06:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:47.207 12:06:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:47.465 12:06:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:47.465 12:06:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:47.465 12:06:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:47.724 12:06:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:47.724 12:06:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:47.724 12:06:53 -- bdev/bdev_raid.sh@489 -- # i=3 00:22:47.724 12:06:53 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:47.983 [2024-11-29 12:06:53.344312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:47.983 [2024-11-29 12:06:53.345038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.983 [2024-11-29 12:06:53.345342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:47.983 [2024-11-29 12:06:53.345611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.983 [2024-11-29 12:06:53.346404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.983 [2024-11-29 12:06:53.346705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:47.983 [2024-11-29 12:06:53.347054] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:47.983 [2024-11-29 12:06:53.347190] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:47.983 [2024-11-29 12:06:53.347297] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:47.983 [2024-11-29 12:06:53.347368] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 
00:22:47.983 [2024-11-29 12:06:53.347548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:47.983 pt4 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.983 12:06:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.241 12:06:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.241 "name": "raid_bdev1", 00:22:48.241 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:48.241 "strip_size_kb": 0, 00:22:48.241 "state": "configuring", 00:22:48.241 "raid_level": "raid1", 00:22:48.241 "superblock": true, 00:22:48.241 "num_base_bdevs": 4, 00:22:48.241 "num_base_bdevs_discovered": 1, 00:22:48.241 "num_base_bdevs_operational": 3, 00:22:48.241 "base_bdevs_list": [ 00:22:48.241 { 00:22:48.241 "name": null, 00:22:48.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.241 "is_configured": false, 00:22:48.241 "data_offset": 2048, 00:22:48.241 "data_size": 63488 00:22:48.241 }, 00:22:48.241 { 00:22:48.241 "name": null, 00:22:48.241 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:48.241 "is_configured": false, 00:22:48.241 "data_offset": 2048, 00:22:48.241 "data_size": 63488 00:22:48.241 }, 00:22:48.241 { 00:22:48.241 "name": null, 00:22:48.241 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:48.241 "is_configured": false, 00:22:48.241 "data_offset": 2048, 00:22:48.241 "data_size": 63488 00:22:48.241 }, 00:22:48.241 { 00:22:48.241 "name": "pt4", 00:22:48.241 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:48.241 "is_configured": true, 00:22:48.241 "data_offset": 2048, 00:22:48.241 "data_size": 63488 00:22:48.241 } 00:22:48.241 ] 00:22:48.241 }' 00:22:48.241 12:06:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.242 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:48.808 12:06:54 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:48.808 12:06:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:48.808 12:06:54 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:49.067 [2024-11-29 12:06:54.508573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:49.067 [2024-11-29 12:06:54.509275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.067 [2024-11-29 12:06:54.509557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:49.067 [2024-11-29 12:06:54.509835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.067 [2024-11-29 
12:06:54.510615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.067 [2024-11-29 12:06:54.510927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:49.067 [2024-11-29 12:06:54.511265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:49.067 [2024-11-29 12:06:54.511414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:49.067 pt2 00:22:49.067 12:06:54 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:49.067 12:06:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:49.067 12:06:54 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:49.326 [2024-11-29 12:06:54.736603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:49.326 [2024-11-29 12:06:54.737196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.326 [2024-11-29 12:06:54.737475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:49.326 [2024-11-29 12:06:54.737739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.326 [2024-11-29 12:06:54.738503] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.326 [2024-11-29 12:06:54.738790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:49.326 [2024-11-29 12:06:54.739101] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:49.326 [2024-11-29 12:06:54.739238] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:49.326 [2024-11-29 12:06:54.739439] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:49.326 [2024-11-29 12:06:54.739541] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:49.326 [2024-11-29 12:06:54.739729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:22:49.326 [2024-11-29 12:06:54.740180] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:49.326 [2024-11-29 12:06:54.740293] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:49.326 [2024-11-29 12:06:54.740594] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.326 pt3 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.326 12:06:54 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.326 12:06:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.585 12:06:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.585 "name": "raid_bdev1", 00:22:49.585 "uuid": "3f931b0d-6fa1-4985-9a51-03c63b3985d4", 00:22:49.585 "strip_size_kb": 0, 00:22:49.585 "state": "online", 00:22:49.585 "raid_level": "raid1", 00:22:49.585 "superblock": true, 00:22:49.585 "num_base_bdevs": 4, 00:22:49.585 "num_base_bdevs_discovered": 3, 00:22:49.585 "num_base_bdevs_operational": 3, 00:22:49.585 "base_bdevs_list": [ 00:22:49.585 { 00:22:49.585 "name": null, 00:22:49.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.585 "is_configured": false, 00:22:49.585 "data_offset": 2048, 00:22:49.585 "data_size": 63488 00:22:49.585 }, 00:22:49.585 { 00:22:49.585 "name": "pt2", 00:22:49.585 "uuid": "49e95802-164a-541f-bcc0-e342851f633a", 00:22:49.585 "is_configured": true, 00:22:49.585 "data_offset": 2048, 00:22:49.585 "data_size": 63488 00:22:49.585 }, 00:22:49.585 { 00:22:49.585 "name": "pt3", 00:22:49.585 "uuid": "0aebc6fc-a10e-52b0-b24e-ba7faf71d96a", 00:22:49.585 "is_configured": true, 00:22:49.585 "data_offset": 2048, 00:22:49.585 "data_size": 63488 00:22:49.585 }, 00:22:49.585 { 00:22:49.585 "name": "pt4", 00:22:49.585 "uuid": "060d0480-5dbf-5110-91b2-855e1e8b32eb", 00:22:49.585 "is_configured": true, 00:22:49.585 "data_offset": 2048, 00:22:49.585 "data_size": 63488 00:22:49.585 } 00:22:49.585 ] 00:22:49.585 }' 00:22:49.585 12:06:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.585 12:06:55 -- common/autotest_common.sh@10 -- # set +x 00:22:50.151 12:06:55 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:50.151 12:06:55 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:50.409 [2024-11-29 12:06:55.909172] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.668 12:06:55 -- bdev/bdev_raid.sh@506 -- # '[' 3f931b0d-6fa1-4985-9a51-03c63b3985d4 '!=' 3f931b0d-6fa1-4985-9a51-03c63b3985d4 ']' 00:22:50.668 12:06:55 -- bdev/bdev_raid.sh@511 -- # killprocess 133137 00:22:50.668 12:06:55 -- common/autotest_common.sh@936 -- # '[' -z 133137 ']' 00:22:50.668 12:06:55 -- common/autotest_common.sh@940 -- # kill -0 133137 00:22:50.668 12:06:55 -- common/autotest_common.sh@941 -- # uname 00:22:50.668 12:06:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.668 12:06:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133137 00:22:50.668 killing process with pid 133137 00:22:50.668 12:06:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:50.668 12:06:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:50.668 12:06:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133137' 00:22:50.668 12:06:55 -- common/autotest_common.sh@955 -- # kill 133137 00:22:50.668 12:06:55 -- common/autotest_common.sh@960 -- # wait 133137 00:22:50.668 [2024-11-29 12:06:55.953572] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:50.668 [2024-11-29 12:06:55.953675] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.668 [2024-11-29 12:06:55.953760] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:50.668 [2024-11-29 
12:06:55.953772] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:22:50.668 [2024-11-29 12:06:56.006656] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.939 ************************************ 00:22:50.939 END TEST raid_superblock_test 00:22:50.939 ************************************ 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:50.939 00:22:50.939 real 0m22.331s 00:22:50.939 user 0m41.887s 00:22:50.939 sys 0m2.764s 00:22:50.939 12:06:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:50.939 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:22:50.939 12:06:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:50.939 12:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:50.939 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:22:50.939 ************************************ 00:22:50.939 START TEST raid_rebuild_test 00:22:50.939 ************************************ 00:22:50.939 12:06:56 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:50.939 12:06:56 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:50.940 12:06:56 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:50.940 12:06:56 -- bdev/bdev_raid.sh@544 -- # raid_pid=133820 00:22:50.940 12:06:56 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133820 /var/tmp/spdk-raid.sock 00:22:50.940 12:06:56 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:50.940 12:06:56 -- common/autotest_common.sh@829 -- # '[' -z 133820 ']' 00:22:50.940 12:06:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:50.940 12:06:56 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.940 12:06:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:50.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:50.940 12:06:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.940 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:22:50.940 [2024-11-29 12:06:56.375070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:50.940 [2024-11-29 12:06:56.375527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133820 ] 00:22:50.940 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:50.940 Zero copy mechanism will not be used. 00:22:51.213 [2024-11-29 12:06:56.512520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.213 [2024-11-29 12:06:56.607585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.213 [2024-11-29 12:06:56.661794] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:52.150 12:06:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.150 12:06:57 -- common/autotest_common.sh@862 -- # return 0 00:22:52.150 12:06:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:52.150 12:06:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:52.150 12:06:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:52.150 BaseBdev1 00:22:52.150 12:06:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:52.150 12:06:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:52.150 12:06:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:52.715 BaseBdev2 00:22:52.715 12:06:57 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:52.715 spare_malloc 00:22:52.715 12:06:58 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:52.973 spare_delay 00:22:52.973 12:06:58 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:53.232 [2024-11-29 12:06:58.677713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.232 [2024-11-29 12:06:58.678198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.232 [2024-11-29 12:06:58.678448] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:53.232 [2024-11-29 12:06:58.678632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.232 [2024-11-29 12:06:58.681582] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.232 [2024-11-29 12:06:58.681780] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.232 spare 00:22:53.232 12:06:58 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:53.490 [2024-11-29 12:06:58.930291] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:53.490 [2024-11-29 12:06:58.932901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.490 [2024-11-29 12:06:58.933165] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:22:53.490 [2024-11-29 12:06:58.933217] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:53.490 [2024-11-29 12:06:58.933543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:53.490 [2024-11-29 12:06:58.934116] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:22:53.490 [2024-11-29 12:06:58.934248] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:22:53.490 [2024-11-29 12:06:58.934622] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.490 12:06:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.749 12:06:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.749 "name": "raid_bdev1", 00:22:53.749 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:22:53.749 "strip_size_kb": 0, 00:22:53.749 "state": "online", 00:22:53.749 "raid_level": "raid1", 00:22:53.749 "superblock": false, 00:22:53.749 "num_base_bdevs": 2, 00:22:53.749 "num_base_bdevs_discovered": 2, 00:22:53.749 "num_base_bdevs_operational": 2, 00:22:53.749 "base_bdevs_list": [ 00:22:53.749 { 00:22:53.749 "name": "BaseBdev1", 00:22:53.749 "uuid": "4ae869f6-d509-4712-8934-9e91f7945577", 00:22:53.749 "is_configured": true, 00:22:53.749 "data_offset": 0, 00:22:53.749 "data_size": 65536 00:22:53.749 }, 00:22:53.749 { 00:22:53.749 "name": "BaseBdev2", 00:22:53.749 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:22:53.749 "is_configured": true, 00:22:53.749 "data_offset": 0, 00:22:53.749 "data_size": 65536 00:22:53.749 } 00:22:53.749 ] 00:22:53.749 }' 00:22:53.749 12:06:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.749 12:06:59 -- common/autotest_common.sh@10 -- # set +x 00:22:54.314 12:06:59 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:54.314 12:06:59 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:54.572 [2024-11-29 12:07:00.067101] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.831 12:07:00 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:54.831 12:07:00 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.831 12:07:00 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:55.089 12:07:00 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:55.089 12:07:00 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:55.089 12:07:00 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:55.089 12:07:00 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@12 -- # local i 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.089 12:07:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:55.089 [2024-11-29 12:07:00.599032] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:22:55.348 /dev/nbd0 00:22:55.348 12:07:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.348 12:07:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.348 12:07:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:55.348 12:07:00 -- common/autotest_common.sh@867 -- # local i 00:22:55.348 12:07:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:55.348 12:07:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:55.348 12:07:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:55.348 12:07:00 -- common/autotest_common.sh@871 -- # break 00:22:55.348 12:07:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:55.348 12:07:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:55.348 12:07:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.348 1+0 records in 00:22:55.348 1+0 records out 00:22:55.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578961 s, 7.1 MB/s 00:22:55.348 12:07:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.348 12:07:00 -- common/autotest_common.sh@884 -- # size=4096 00:22:55.348 12:07:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.348 12:07:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:55.348 12:07:00 -- common/autotest_common.sh@887 -- # return 0 00:22:55.348 12:07:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.348 12:07:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.348 12:07:00 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:55.348 12:07:00 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:55.348 12:07:00 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:00.672 65536+0 records in 00:23:00.672 65536+0 records out 00:23:00.672 33554432 bytes (34 MB, 32 MiB) 
copied, 4.82771 s, 7.0 MB/s 00:23:00.672 12:07:05 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@51 -- # local i 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:00.672 [2024-11-29 12:07:05.771609] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@41 -- # break 00:23:00.672 12:07:05 -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.672 12:07:05 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:00.672 [2024-11-29 12:07:05.999399] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.672 12:07:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.930 12:07:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.930 "name": "raid_bdev1", 00:23:00.930 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:00.930 "strip_size_kb": 0, 00:23:00.930 "state": "online", 00:23:00.930 "raid_level": "raid1", 00:23:00.930 "superblock": false, 00:23:00.930 "num_base_bdevs": 2, 00:23:00.930 "num_base_bdevs_discovered": 1, 00:23:00.930 "num_base_bdevs_operational": 1, 00:23:00.930 "base_bdevs_list": [ 00:23:00.930 { 00:23:00.930 "name": null, 00:23:00.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.930 "is_configured": false, 00:23:00.930 "data_offset": 0, 00:23:00.930 "data_size": 65536 00:23:00.930 }, 00:23:00.930 { 00:23:00.930 "name": "BaseBdev2", 00:23:00.930 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:00.930 "is_configured": true, 00:23:00.930 "data_offset": 0, 00:23:00.930 "data_size": 65536 00:23:00.930 } 00:23:00.930 ] 00:23:00.930 }' 
00:23:00.930 12:07:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.930 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:23:01.496 12:07:06 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.754 [2024-11-29 12:07:07.087589] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:01.754 [2024-11-29 12:07:07.087956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.754 [2024-11-29 12:07:07.093496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d05ee0 00:23:01.754 [2024-11-29 12:07:07.096053] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.754 12:07:07 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.686 12:07:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.945 12:07:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:02.945 "name": "raid_bdev1", 00:23:02.945 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:02.945 "strip_size_kb": 0, 00:23:02.945 "state": "online", 00:23:02.945 "raid_level": "raid1", 00:23:02.945 "superblock": false, 00:23:02.945 "num_base_bdevs": 2, 00:23:02.945 "num_base_bdevs_discovered": 2, 00:23:02.945 "num_base_bdevs_operational": 2, 00:23:02.945 "process": { 00:23:02.945 "type": "rebuild", 00:23:02.945 "target": "spare", 00:23:02.945 "progress": { 00:23:02.945 "blocks": 24576, 00:23:02.945 "percent": 37 00:23:02.945 } 00:23:02.945 }, 00:23:02.945 "base_bdevs_list": [ 00:23:02.945 { 00:23:02.945 "name": "spare", 00:23:02.945 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:02.945 "is_configured": true, 00:23:02.945 "data_offset": 0, 00:23:02.945 "data_size": 65536 00:23:02.945 }, 00:23:02.945 { 00:23:02.945 "name": "BaseBdev2", 00:23:02.945 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:02.945 "is_configured": true, 00:23:02.945 "data_offset": 0, 00:23:02.945 "data_size": 65536 00:23:02.945 } 00:23:02.945 ] 00:23:02.945 }' 00:23:02.945 12:07:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:02.945 12:07:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:02.945 12:07:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:02.945 12:07:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.945 12:07:08 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:03.203 [2024-11-29 12:07:08.715101] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.461 [2024-11-29 12:07:08.808373] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:03.461 [2024-11-29 12:07:08.808742] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.461 12:07:08 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.461 12:07:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.718 12:07:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.718 "name": "raid_bdev1", 00:23:03.718 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:03.718 "strip_size_kb": 0, 00:23:03.718 "state": "online", 00:23:03.718 "raid_level": "raid1", 00:23:03.718 "superblock": false, 00:23:03.718 "num_base_bdevs": 2, 00:23:03.718 "num_base_bdevs_discovered": 1, 00:23:03.718 "num_base_bdevs_operational": 1, 00:23:03.718 "base_bdevs_list": [ 00:23:03.718 { 00:23:03.718 "name": null, 00:23:03.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.718 "is_configured": false, 00:23:03.718 "data_offset": 0, 00:23:03.718 "data_size": 65536 00:23:03.718 }, 00:23:03.718 { 00:23:03.718 "name": "BaseBdev2", 00:23:03.718 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:03.718 "is_configured": true, 00:23:03.718 "data_offset": 0, 00:23:03.718 "data_size": 65536 00:23:03.718 } 00:23:03.718 ] 00:23:03.718 }' 00:23:03.718 12:07:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.718 12:07:09 -- common/autotest_common.sh@10 -- # set +x 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.285 12:07:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.543 12:07:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:04.543 "name": "raid_bdev1", 00:23:04.543 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:04.543 "strip_size_kb": 0, 00:23:04.543 "state": "online", 00:23:04.543 "raid_level": "raid1", 00:23:04.543 "superblock": false, 00:23:04.543 "num_base_bdevs": 2, 00:23:04.543 "num_base_bdevs_discovered": 1, 00:23:04.543 "num_base_bdevs_operational": 1, 00:23:04.543 "base_bdevs_list": [ 00:23:04.543 { 00:23:04.543 "name": null, 00:23:04.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.543 "is_configured": false, 00:23:04.543 "data_offset": 0, 00:23:04.543 "data_size": 65536 00:23:04.543 }, 00:23:04.543 { 00:23:04.543 "name": "BaseBdev2", 00:23:04.543 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:04.543 "is_configured": true, 
00:23:04.543 "data_offset": 0, 00:23:04.543 "data_size": 65536 00:23:04.543 } 00:23:04.543 ] 00:23:04.543 }' 00:23:04.543 12:07:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.543 12:07:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.543 12:07:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:04.543 12:07:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:04.543 12:07:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.801 [2024-11-29 12:07:10.306931] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:04.801 [2024-11-29 12:07:10.307277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.801 [2024-11-29 12:07:10.312667] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:23:04.801 [2024-11-29 12:07:10.315212] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:05.060 12:07:10 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.996 12:07:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.254 "name": "raid_bdev1", 00:23:06.254 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:06.254 "strip_size_kb": 0, 00:23:06.254 "state": "online", 00:23:06.254 "raid_level": "raid1", 00:23:06.254 "superblock": false, 00:23:06.254 "num_base_bdevs": 2, 00:23:06.254 "num_base_bdevs_discovered": 2, 00:23:06.254 "num_base_bdevs_operational": 2, 00:23:06.254 "process": { 00:23:06.254 "type": "rebuild", 00:23:06.254 "target": "spare", 00:23:06.254 "progress": { 00:23:06.254 "blocks": 24576, 00:23:06.254 "percent": 37 00:23:06.254 } 00:23:06.254 }, 00:23:06.254 "base_bdevs_list": [ 00:23:06.254 { 00:23:06.254 "name": "spare", 00:23:06.254 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:06.254 "is_configured": true, 00:23:06.254 "data_offset": 0, 00:23:06.254 "data_size": 65536 00:23:06.254 }, 00:23:06.254 { 00:23:06.254 "name": "BaseBdev2", 00:23:06.254 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:06.254 "is_configured": true, 00:23:06.254 "data_offset": 0, 00:23:06.254 "data_size": 65536 00:23:06.254 } 00:23:06.254 ] 00:23:06.254 }' 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:06.254 12:07:11 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@657 -- # local timeout=405 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.254 12:07:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.513 12:07:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.513 "name": "raid_bdev1", 00:23:06.513 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:06.513 "strip_size_kb": 0, 00:23:06.513 "state": "online", 00:23:06.513 "raid_level": "raid1", 00:23:06.513 "superblock": false, 00:23:06.513 "num_base_bdevs": 2, 00:23:06.513 "num_base_bdevs_discovered": 2, 00:23:06.513 "num_base_bdevs_operational": 2, 00:23:06.513 "process": { 00:23:06.513 "type": "rebuild", 00:23:06.513 "target": "spare", 00:23:06.513 "progress": { 00:23:06.513 "blocks": 32768, 00:23:06.513 "percent": 50 00:23:06.513 } 00:23:06.513 }, 00:23:06.513 "base_bdevs_list": [ 00:23:06.513 { 00:23:06.513 "name": "spare", 00:23:06.513 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:06.513 "is_configured": true, 00:23:06.513 "data_offset": 0, 00:23:06.513 "data_size": 65536 00:23:06.513 }, 00:23:06.513 { 00:23:06.513 "name": "BaseBdev2", 00:23:06.513 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:06.513 "is_configured": true, 00:23:06.513 "data_offset": 0, 00:23:06.513 "data_size": 65536 00:23:06.513 } 00:23:06.513 ] 00:23:06.513 }' 00:23:06.513 12:07:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.771 12:07:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.771 12:07:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.771 12:07:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.771 12:07:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.706 12:07:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.965 12:07:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:07.965 "name": "raid_bdev1", 00:23:07.965 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:07.965 "strip_size_kb": 0, 00:23:07.965 "state": "online", 00:23:07.965 "raid_level": "raid1", 00:23:07.965 "superblock": false, 00:23:07.965 "num_base_bdevs": 2, 00:23:07.965 "num_base_bdevs_discovered": 2, 00:23:07.965 "num_base_bdevs_operational": 2, 00:23:07.965 "process": { 
00:23:07.965 "type": "rebuild", 00:23:07.965 "target": "spare", 00:23:07.965 "progress": { 00:23:07.965 "blocks": 59392, 00:23:07.965 "percent": 90 00:23:07.965 } 00:23:07.965 }, 00:23:07.965 "base_bdevs_list": [ 00:23:07.965 { 00:23:07.965 "name": "spare", 00:23:07.965 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:07.965 "is_configured": true, 00:23:07.965 "data_offset": 0, 00:23:07.965 "data_size": 65536 00:23:07.965 }, 00:23:07.965 { 00:23:07.965 "name": "BaseBdev2", 00:23:07.965 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:07.965 "is_configured": true, 00:23:07.965 "data_offset": 0, 00:23:07.965 "data_size": 65536 00:23:07.965 } 00:23:07.965 ] 00:23:07.965 }' 00:23:07.965 12:07:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:07.965 12:07:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.965 12:07:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:07.965 12:07:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.965 12:07:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:08.245 [2024-11-29 12:07:13.536046] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:08.245 [2024-11-29 12:07:13.536358] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:08.245 [2024-11-29 12:07:13.536579] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.179 12:07:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.436 "name": "raid_bdev1", 00:23:09.436 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:09.436 "strip_size_kb": 0, 00:23:09.436 "state": "online", 00:23:09.436 "raid_level": "raid1", 00:23:09.436 "superblock": false, 00:23:09.436 "num_base_bdevs": 2, 00:23:09.436 "num_base_bdevs_discovered": 2, 00:23:09.436 "num_base_bdevs_operational": 2, 00:23:09.436 "base_bdevs_list": [ 00:23:09.436 { 00:23:09.436 "name": "spare", 00:23:09.436 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:09.436 "is_configured": true, 00:23:09.436 "data_offset": 0, 00:23:09.436 "data_size": 65536 00:23:09.436 }, 00:23:09.436 { 00:23:09.436 "name": "BaseBdev2", 00:23:09.436 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:09.436 "is_configured": true, 00:23:09.436 "data_offset": 0, 00:23:09.436 "data_size": 65536 00:23:09.436 } 00:23:09.436 ] 00:23:09.436 }' 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@660 -- # break 00:23:09.436 12:07:14 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.436 12:07:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.694 "name": "raid_bdev1", 00:23:09.694 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:09.694 "strip_size_kb": 0, 00:23:09.694 "state": "online", 00:23:09.694 "raid_level": "raid1", 00:23:09.694 "superblock": false, 00:23:09.694 "num_base_bdevs": 2, 00:23:09.694 "num_base_bdevs_discovered": 2, 00:23:09.694 "num_base_bdevs_operational": 2, 00:23:09.694 "base_bdevs_list": [ 00:23:09.694 { 00:23:09.694 "name": "spare", 00:23:09.694 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:09.694 "is_configured": true, 00:23:09.694 "data_offset": 0, 00:23:09.694 "data_size": 65536 00:23:09.694 }, 00:23:09.694 { 00:23:09.694 "name": "BaseBdev2", 00:23:09.694 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:09.694 "is_configured": true, 00:23:09.694 "data_offset": 0, 00:23:09.694 "data_size": 65536 00:23:09.694 } 00:23:09.694 ] 00:23:09.694 }' 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.694 12:07:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.951 12:07:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.951 "name": "raid_bdev1", 00:23:09.951 "uuid": "00525cf5-4001-4531-b3e6-7a8bc6123a54", 00:23:09.951 "strip_size_kb": 0, 00:23:09.951 "state": "online", 00:23:09.951 "raid_level": "raid1", 00:23:09.951 "superblock": false, 00:23:09.951 "num_base_bdevs": 2, 00:23:09.951 "num_base_bdevs_discovered": 2, 00:23:09.951 "num_base_bdevs_operational": 2, 00:23:09.951 "base_bdevs_list": [ 00:23:09.951 { 00:23:09.951 "name": "spare", 00:23:09.951 "uuid": "9bcbfcbd-3ddd-5358-9df7-dbb067140322", 00:23:09.951 "is_configured": true, 00:23:09.951 "data_offset": 0, 
00:23:09.951 "data_size": 65536 00:23:09.951 }, 00:23:09.951 { 00:23:09.951 "name": "BaseBdev2", 00:23:09.951 "uuid": "e2219c64-6c94-446e-9f23-0d7bf727a68e", 00:23:09.951 "is_configured": true, 00:23:09.951 "data_offset": 0, 00:23:09.951 "data_size": 65536 00:23:09.951 } 00:23:09.951 ] 00:23:09.951 }' 00:23:09.951 12:07:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.951 12:07:15 -- common/autotest_common.sh@10 -- # set +x 00:23:10.886 12:07:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:10.886 [2024-11-29 12:07:16.294868] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:10.886 [2024-11-29 12:07:16.295182] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.886 [2024-11-29 12:07:16.295436] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.886 [2024-11-29 12:07:16.295648] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:10.886 [2024-11-29 12:07:16.295781] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:23:10.886 12:07:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:10.886 12:07:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.145 12:07:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:11.145 12:07:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:11.145 12:07:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@12 -- # local i 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.145 12:07:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:11.404 /dev/nbd0 00:23:11.404 12:07:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:11.404 12:07:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:11.404 12:07:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:11.404 12:07:16 -- common/autotest_common.sh@867 -- # local i 00:23:11.404 12:07:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:11.404 12:07:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:11.404 12:07:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:11.404 12:07:16 -- common/autotest_common.sh@871 -- # break 00:23:11.404 12:07:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:11.404 12:07:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:11.404 12:07:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.404 1+0 records in 00:23:11.404 1+0 records out 00:23:11.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068143 s, 6.0 MB/s 00:23:11.662 12:07:16 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.662 12:07:16 -- common/autotest_common.sh@884 -- # size=4096 00:23:11.662 12:07:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.662 12:07:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:11.662 12:07:16 -- common/autotest_common.sh@887 -- # return 0 00:23:11.662 12:07:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.662 12:07:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.662 12:07:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:11.921 /dev/nbd1 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:11.921 12:07:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:11.921 12:07:17 -- common/autotest_common.sh@867 -- # local i 00:23:11.921 12:07:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:11.921 12:07:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:11.921 12:07:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:11.921 12:07:17 -- common/autotest_common.sh@871 -- # break 00:23:11.921 12:07:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:11.921 12:07:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:11.921 12:07:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.921 1+0 records in 00:23:11.921 1+0 records out 00:23:11.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611507 s, 6.7 MB/s 00:23:11.921 12:07:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.921 12:07:17 -- common/autotest_common.sh@884 -- # size=4096 00:23:11.921 12:07:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.921 12:07:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:11.921 12:07:17 -- common/autotest_common.sh@887 -- # return 0 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.921 12:07:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:11.921 12:07:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@51 -- # local i 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.921 12:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@41 -- # break 
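The waitfornbd checks interleaved above first poll /proc/partitions for the new nbd device and then read a single 4 KiB block with O_DIRECT to confirm the export is really serving I/O. A minimal stand-alone sketch of that probe, with the scratch file moved to /tmp and a 0.1 s retry delay added (both assumptions; the test uses its own nbdtest file under test/bdev and its own retry logic):

  # Hypothetical waitfornbd-style probe modeled on the checks in the log.
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # Read one block with O_DIRECT; a 4096-byte result means the device is live.
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
  }
  waitfornbd nbd1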
00:23:12.180 12:07:17 -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.180 12:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@41 -- # break 00:23:12.438 12:07:17 -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.438 12:07:17 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:12.438 12:07:17 -- bdev/bdev_raid.sh@709 -- # killprocess 133820 00:23:12.438 12:07:17 -- common/autotest_common.sh@936 -- # '[' -z 133820 ']' 00:23:12.438 12:07:17 -- common/autotest_common.sh@940 -- # kill -0 133820 00:23:12.438 12:07:17 -- common/autotest_common.sh@941 -- # uname 00:23:12.438 12:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.438 12:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133820 00:23:12.697 12:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:12.697 12:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:12.697 12:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133820' 00:23:12.697 killing process with pid 133820 00:23:12.697 12:07:17 -- common/autotest_common.sh@955 -- # kill 133820 00:23:12.697 Received shutdown signal, test time was about 60.000000 seconds 00:23:12.697 00:23:12.697 Latency(us) 00:23:12.697 [2024-11-29T12:07:18.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.697 [2024-11-29T12:07:18.208Z] =================================================================================================================== 00:23:12.697 [2024-11-29T12:07:18.208Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.697 12:07:17 -- common/autotest_common.sh@960 -- # wait 133820 00:23:12.697 [2024-11-29 12:07:17.959717] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.697 [2024-11-29 12:07:17.997636] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:12.955 00:23:12.955 real 0m21.946s 00:23:12.955 user 0m31.267s 00:23:12.955 sys 0m3.618s 00:23:12.955 12:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:12.955 12:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:12.955 ************************************ 00:23:12.955 END TEST raid_rebuild_test 00:23:12.955 ************************************ 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:23:12.955 12:07:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:12.955 12:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:12.955 12:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:12.955 ************************************ 00:23:12.955 START TEST raid_rebuild_test_sb 00:23:12.955 ************************************ 00:23:12.955 12:07:18 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:23:12.955 
12:07:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=134369 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134369 /var/tmp/spdk-raid.sock 00:23:12.955 12:07:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:12.955 12:07:18 -- common/autotest_common.sh@829 -- # '[' -z 134369 ']' 00:23:12.955 12:07:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:12.955 12:07:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.955 12:07:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:12.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:12.955 12:07:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.955 12:07:18 -- common/autotest_common.sh@10 -- # set +x 00:23:12.955 [2024-11-29 12:07:18.380029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:12.955 [2024-11-29 12:07:18.380503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134369 ] 00:23:12.955 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:12.955 Zero copy mechanism will not be used. 
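This is the start of the superblock variant: run_test re-invokes raid_rebuild_test as raid_rebuild_test_sb with superblock=true, and bdevperf is launched against its own RPC socket before any bdevs exist. A rough equivalent of that launch-and-wait step, reusing the paths recorded in the log and substituting a plain rpc_get_methods poll for the test's waitforlisten helper (an assumption; waitforlisten is more thorough):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock
  # Start bdevperf against the raid_bdev1 target: 60 s of randrw, 50% reads, 3 MiB I/Os, queue depth 2.
  "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Wait until the app answers RPCs on the socket before configuring any bdevs.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done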
00:23:13.214 [2024-11-29 12:07:18.524971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.214 [2024-11-29 12:07:18.619487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.214 [2024-11-29 12:07:18.673566] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.150 12:07:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.150 12:07:19 -- common/autotest_common.sh@862 -- # return 0 00:23:14.150 12:07:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:14.150 12:07:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:14.150 12:07:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:14.150 BaseBdev1_malloc 00:23:14.150 12:07:19 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:14.408 [2024-11-29 12:07:19.922673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:14.667 [2024-11-29 12:07:19.923137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.667 [2024-11-29 12:07:19.923224] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:14.667 [2024-11-29 12:07:19.923494] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.667 [2024-11-29 12:07:19.926331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.667 [2024-11-29 12:07:19.926544] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:14.667 BaseBdev1 00:23:14.667 12:07:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:14.667 12:07:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:14.667 12:07:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:14.926 BaseBdev2_malloc 00:23:14.926 12:07:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:15.184 [2024-11-29 12:07:20.466152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:15.184 [2024-11-29 12:07:20.466581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.184 [2024-11-29 12:07:20.466671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:15.184 [2024-11-29 12:07:20.466912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.184 [2024-11-29 12:07:20.469519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.184 [2024-11-29 12:07:20.469690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:15.184 BaseBdev2 00:23:15.184 12:07:20 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:15.442 spare_malloc 00:23:15.442 12:07:20 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:15.700 spare_delay 00:23:15.700 12:07:20 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:15.700 [2024-11-29 12:07:21.202692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:15.700 [2024-11-29 12:07:21.203104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.700 [2024-11-29 12:07:21.203198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:15.700 [2024-11-29 12:07:21.203468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.700 [2024-11-29 12:07:21.206173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.700 [2024-11-29 12:07:21.206364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:15.701 spare 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:15.960 [2024-11-29 12:07:21.438880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.960 [2024-11-29 12:07:21.441457] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:15.960 [2024-11-29 12:07:21.441832] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:23:15.960 [2024-11-29 12:07:21.441974] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:15.960 [2024-11-29 12:07:21.442217] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:23:15.960 [2024-11-29 12:07:21.442797] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:23:15.960 [2024-11-29 12:07:21.442928] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:23:15.960 [2024-11-29 12:07:21.443260] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.960 12:07:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.219 12:07:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.219 "name": "raid_bdev1", 00:23:16.219 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:16.219 "strip_size_kb": 0, 00:23:16.219 "state": "online", 00:23:16.219 "raid_level": "raid1", 00:23:16.219 "superblock": true, 00:23:16.219 "num_base_bdevs": 2, 00:23:16.219 "num_base_bdevs_discovered": 2, 00:23:16.219 "num_base_bdevs_operational": 2, 00:23:16.219 
"base_bdevs_list": [ 00:23:16.219 { 00:23:16.219 "name": "BaseBdev1", 00:23:16.219 "uuid": "864c9c01-224b-5a28-bdd0-68acea0709d8", 00:23:16.219 "is_configured": true, 00:23:16.219 "data_offset": 2048, 00:23:16.219 "data_size": 63488 00:23:16.219 }, 00:23:16.219 { 00:23:16.219 "name": "BaseBdev2", 00:23:16.219 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:16.219 "is_configured": true, 00:23:16.219 "data_offset": 2048, 00:23:16.219 "data_size": 63488 00:23:16.219 } 00:23:16.219 ] 00:23:16.219 }' 00:23:16.219 12:07:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.219 12:07:21 -- common/autotest_common.sh@10 -- # set +x 00:23:17.153 12:07:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.153 12:07:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:17.153 [2024-11-29 12:07:22.615685] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.153 12:07:22 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:17.153 12:07:22 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.153 12:07:22 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:17.412 12:07:22 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:17.412 12:07:22 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:17.412 12:07:22 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:17.412 12:07:22 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@12 -- # local i 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:17.412 12:07:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:17.671 [2024-11-29 12:07:23.155679] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:23:17.929 /dev/nbd0 00:23:17.929 12:07:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:17.929 12:07:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:17.929 12:07:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:17.929 12:07:23 -- common/autotest_common.sh@867 -- # local i 00:23:17.929 12:07:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:17.929 12:07:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:17.930 12:07:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:17.930 12:07:23 -- common/autotest_common.sh@871 -- # break 00:23:17.930 12:07:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:17.930 12:07:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:17.930 12:07:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:17.930 1+0 records in 00:23:17.930 1+0 records out 00:23:17.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680952 s, 6.0 MB/s 00:23:17.930 12:07:23 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:17.930 12:07:23 -- common/autotest_common.sh@884 -- # size=4096 00:23:17.930 12:07:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:17.930 12:07:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:17.930 12:07:23 -- common/autotest_common.sh@887 -- # return 0 00:23:17.930 12:07:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:17.930 12:07:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:17.930 12:07:23 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:23:17.930 12:07:23 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:23:17.930 12:07:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:23.226 63488+0 records in 00:23:23.226 63488+0 records out 00:23:23.226 32505856 bytes (33 MB, 31 MiB) copied, 5.14228 s, 6.3 MB/s 00:23:23.226 12:07:28 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@51 -- # local i 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:23.226 [2024-11-29 12:07:28.632526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@41 -- # break 00:23:23.226 12:07:28 -- bdev/nbd_common.sh@45 -- # return 0 00:23:23.226 12:07:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:23.485 [2024-11-29 12:07:28.940244] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.485 12:07:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.744 12:07:29 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.744 "name": "raid_bdev1", 00:23:23.744 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:23.744 "strip_size_kb": 0, 00:23:23.744 "state": "online", 00:23:23.744 "raid_level": "raid1", 00:23:23.744 "superblock": true, 00:23:23.744 "num_base_bdevs": 2, 00:23:23.744 "num_base_bdevs_discovered": 1, 00:23:23.744 "num_base_bdevs_operational": 1, 00:23:23.744 "base_bdevs_list": [ 00:23:23.744 { 00:23:23.744 "name": null, 00:23:23.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.744 "is_configured": false, 00:23:23.744 "data_offset": 2048, 00:23:23.744 "data_size": 63488 00:23:23.744 }, 00:23:23.744 { 00:23:23.744 "name": "BaseBdev2", 00:23:23.744 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:23.744 "is_configured": true, 00:23:23.744 "data_offset": 2048, 00:23:23.744 "data_size": 63488 00:23:23.744 } 00:23:23.744 ] 00:23:23.744 }' 00:23:23.744 12:07:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.744 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:24.681 12:07:29 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:24.681 [2024-11-29 12:07:30.152541] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:24.681 [2024-11-29 12:07:30.152793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:24.681 [2024-11-29 12:07:30.158374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:23:24.681 [2024-11-29 12:07:30.160919] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:24.681 12:07:30 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:26.056 "name": "raid_bdev1", 00:23:26.056 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:26.056 "strip_size_kb": 0, 00:23:26.056 "state": "online", 00:23:26.056 "raid_level": "raid1", 00:23:26.056 "superblock": true, 00:23:26.056 "num_base_bdevs": 2, 00:23:26.056 "num_base_bdevs_discovered": 2, 00:23:26.056 "num_base_bdevs_operational": 2, 00:23:26.056 "process": { 00:23:26.056 "type": "rebuild", 00:23:26.056 "target": "spare", 00:23:26.056 "progress": { 00:23:26.056 "blocks": 24576, 00:23:26.056 "percent": 38 00:23:26.056 } 00:23:26.056 }, 00:23:26.056 "base_bdevs_list": [ 00:23:26.056 { 00:23:26.056 "name": "spare", 00:23:26.056 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:26.056 "is_configured": true, 00:23:26.056 "data_offset": 2048, 00:23:26.056 "data_size": 63488 00:23:26.056 }, 00:23:26.056 { 00:23:26.056 "name": "BaseBdev2", 00:23:26.056 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:26.056 "is_configured": true, 00:23:26.056 "data_offset": 2048, 00:23:26.056 "data_size": 63488 00:23:26.056 } 
00:23:26.056 ] 00:23:26.056 }' 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:26.056 12:07:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:26.314 [2024-11-29 12:07:31.775461] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:26.573 [2024-11-29 12:07:31.873429] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:26.573 [2024-11-29 12:07:31.873849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.573 12:07:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.831 12:07:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.831 "name": "raid_bdev1", 00:23:26.831 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:26.831 "strip_size_kb": 0, 00:23:26.831 "state": "online", 00:23:26.831 "raid_level": "raid1", 00:23:26.831 "superblock": true, 00:23:26.831 "num_base_bdevs": 2, 00:23:26.831 "num_base_bdevs_discovered": 1, 00:23:26.831 "num_base_bdevs_operational": 1, 00:23:26.831 "base_bdevs_list": [ 00:23:26.831 { 00:23:26.831 "name": null, 00:23:26.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.831 "is_configured": false, 00:23:26.831 "data_offset": 2048, 00:23:26.831 "data_size": 63488 00:23:26.831 }, 00:23:26.831 { 00:23:26.831 "name": "BaseBdev2", 00:23:26.831 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:26.831 "is_configured": true, 00:23:26.831 "data_offset": 2048, 00:23:26.831 "data_size": 63488 00:23:26.831 } 00:23:26.831 ] 00:23:26.831 }' 00:23:26.831 12:07:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.831 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:27.422 12:07:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.680 12:07:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:27.680 "name": "raid_bdev1", 00:23:27.680 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:27.680 "strip_size_kb": 0, 00:23:27.680 "state": "online", 00:23:27.680 "raid_level": "raid1", 00:23:27.680 "superblock": true, 00:23:27.680 "num_base_bdevs": 2, 00:23:27.680 "num_base_bdevs_discovered": 1, 00:23:27.680 "num_base_bdevs_operational": 1, 00:23:27.680 "base_bdevs_list": [ 00:23:27.680 { 00:23:27.680 "name": null, 00:23:27.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.681 "is_configured": false, 00:23:27.681 "data_offset": 2048, 00:23:27.681 "data_size": 63488 00:23:27.681 }, 00:23:27.681 { 00:23:27.681 "name": "BaseBdev2", 00:23:27.681 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:27.681 "is_configured": true, 00:23:27.681 "data_offset": 2048, 00:23:27.681 "data_size": 63488 00:23:27.681 } 00:23:27.681 ] 00:23:27.681 }' 00:23:27.681 12:07:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:27.681 12:07:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:27.681 12:07:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:27.939 12:07:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:27.939 12:07:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.939 [2024-11-29 12:07:33.431955] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:27.939 [2024-11-29 12:07:33.432296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.939 [2024-11-29 12:07:33.437688] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:23:27.939 [2024-11-29 12:07:33.440160] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:28.198 12:07:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.133 12:07:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:29.391 "name": "raid_bdev1", 00:23:29.391 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:29.391 "strip_size_kb": 0, 00:23:29.391 "state": "online", 00:23:29.391 "raid_level": "raid1", 00:23:29.391 "superblock": true, 00:23:29.391 "num_base_bdevs": 2, 00:23:29.391 "num_base_bdevs_discovered": 2, 00:23:29.391 "num_base_bdevs_operational": 2, 00:23:29.391 "process": { 00:23:29.391 "type": "rebuild", 00:23:29.391 "target": "spare", 00:23:29.391 "progress": { 00:23:29.391 "blocks": 24576, 00:23:29.391 "percent": 38 00:23:29.391 } 00:23:29.391 }, 00:23:29.391 "base_bdevs_list": [ 00:23:29.391 { 00:23:29.391 "name": "spare", 00:23:29.391 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:29.391 "is_configured": true, 
00:23:29.391 "data_offset": 2048, 00:23:29.391 "data_size": 63488 00:23:29.391 }, 00:23:29.391 { 00:23:29.391 "name": "BaseBdev2", 00:23:29.391 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:29.391 "is_configured": true, 00:23:29.391 "data_offset": 2048, 00:23:29.391 "data_size": 63488 00:23:29.391 } 00:23:29.391 ] 00:23:29.391 }' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:29.391 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@657 -- # local timeout=428 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.391 12:07:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.650 12:07:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:29.650 "name": "raid_bdev1", 00:23:29.650 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:29.650 "strip_size_kb": 0, 00:23:29.650 "state": "online", 00:23:29.650 "raid_level": "raid1", 00:23:29.650 "superblock": true, 00:23:29.650 "num_base_bdevs": 2, 00:23:29.650 "num_base_bdevs_discovered": 2, 00:23:29.650 "num_base_bdevs_operational": 2, 00:23:29.650 "process": { 00:23:29.650 "type": "rebuild", 00:23:29.650 "target": "spare", 00:23:29.650 "progress": { 00:23:29.650 "blocks": 32768, 00:23:29.650 "percent": 51 00:23:29.650 } 00:23:29.650 }, 00:23:29.650 "base_bdevs_list": [ 00:23:29.650 { 00:23:29.650 "name": "spare", 00:23:29.650 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:29.650 "is_configured": true, 00:23:29.650 "data_offset": 2048, 00:23:29.650 "data_size": 63488 00:23:29.650 }, 00:23:29.650 { 00:23:29.650 "name": "BaseBdev2", 00:23:29.650 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:29.650 "is_configured": true, 00:23:29.650 "data_offset": 2048, 00:23:29.650 "data_size": 63488 00:23:29.650 } 00:23:29.650 ] 00:23:29.650 }' 00:23:29.650 12:07:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:29.650 12:07:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.650 12:07:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:29.908 12:07:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.908 12:07:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < 
timeout )) 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.843 12:07:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.102 12:07:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.102 "name": "raid_bdev1", 00:23:31.102 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:31.102 "strip_size_kb": 0, 00:23:31.102 "state": "online", 00:23:31.102 "raid_level": "raid1", 00:23:31.102 "superblock": true, 00:23:31.102 "num_base_bdevs": 2, 00:23:31.102 "num_base_bdevs_discovered": 2, 00:23:31.102 "num_base_bdevs_operational": 2, 00:23:31.102 "process": { 00:23:31.102 "type": "rebuild", 00:23:31.102 "target": "spare", 00:23:31.102 "progress": { 00:23:31.102 "blocks": 59392, 00:23:31.102 "percent": 93 00:23:31.102 } 00:23:31.102 }, 00:23:31.102 "base_bdevs_list": [ 00:23:31.102 { 00:23:31.102 "name": "spare", 00:23:31.102 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:31.102 "is_configured": true, 00:23:31.102 "data_offset": 2048, 00:23:31.102 "data_size": 63488 00:23:31.102 }, 00:23:31.102 { 00:23:31.102 "name": "BaseBdev2", 00:23:31.102 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:31.102 "is_configured": true, 00:23:31.102 "data_offset": 2048, 00:23:31.102 "data_size": 63488 00:23:31.102 } 00:23:31.102 ] 00:23:31.102 }' 00:23:31.102 12:07:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.102 12:07:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.102 12:07:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.102 [2024-11-29 12:07:36.560536] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:31.102 [2024-11-29 12:07:36.560951] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:31.102 [2024-11-29 12:07:36.561245] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.102 12:07:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.102 12:07:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:32.477 "name": "raid_bdev1", 00:23:32.477 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:32.477 "strip_size_kb": 0, 00:23:32.477 "state": "online", 00:23:32.477 
"raid_level": "raid1", 00:23:32.477 "superblock": true, 00:23:32.477 "num_base_bdevs": 2, 00:23:32.477 "num_base_bdevs_discovered": 2, 00:23:32.477 "num_base_bdevs_operational": 2, 00:23:32.477 "base_bdevs_list": [ 00:23:32.477 { 00:23:32.477 "name": "spare", 00:23:32.477 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:32.477 "is_configured": true, 00:23:32.477 "data_offset": 2048, 00:23:32.477 "data_size": 63488 00:23:32.477 }, 00:23:32.477 { 00:23:32.477 "name": "BaseBdev2", 00:23:32.477 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:32.477 "is_configured": true, 00:23:32.477 "data_offset": 2048, 00:23:32.477 "data_size": 63488 00:23:32.477 } 00:23:32.477 ] 00:23:32.477 }' 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@660 -- # break 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.477 12:07:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.736 12:07:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:32.736 "name": "raid_bdev1", 00:23:32.736 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:32.736 "strip_size_kb": 0, 00:23:32.736 "state": "online", 00:23:32.736 "raid_level": "raid1", 00:23:32.736 "superblock": true, 00:23:32.736 "num_base_bdevs": 2, 00:23:32.736 "num_base_bdevs_discovered": 2, 00:23:32.736 "num_base_bdevs_operational": 2, 00:23:32.736 "base_bdevs_list": [ 00:23:32.736 { 00:23:32.736 "name": "spare", 00:23:32.736 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:32.736 "is_configured": true, 00:23:32.736 "data_offset": 2048, 00:23:32.736 "data_size": 63488 00:23:32.736 }, 00:23:32.736 { 00:23:32.736 "name": "BaseBdev2", 00:23:32.736 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:32.736 "is_configured": true, 00:23:32.736 "data_offset": 2048, 00:23:32.736 "data_size": 63488 00:23:32.736 } 00:23:32.736 ] 00:23:32.736 }' 00:23:32.736 12:07:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.994 12:07:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.252 12:07:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.252 "name": "raid_bdev1", 00:23:33.252 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:33.252 "strip_size_kb": 0, 00:23:33.252 "state": "online", 00:23:33.252 "raid_level": "raid1", 00:23:33.252 "superblock": true, 00:23:33.252 "num_base_bdevs": 2, 00:23:33.252 "num_base_bdevs_discovered": 2, 00:23:33.252 "num_base_bdevs_operational": 2, 00:23:33.252 "base_bdevs_list": [ 00:23:33.252 { 00:23:33.252 "name": "spare", 00:23:33.252 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:33.252 "is_configured": true, 00:23:33.252 "data_offset": 2048, 00:23:33.252 "data_size": 63488 00:23:33.252 }, 00:23:33.252 { 00:23:33.252 "name": "BaseBdev2", 00:23:33.252 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:33.252 "is_configured": true, 00:23:33.252 "data_offset": 2048, 00:23:33.252 "data_size": 63488 00:23:33.252 } 00:23:33.252 ] 00:23:33.252 }' 00:23:33.252 12:07:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.252 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:23:33.817 12:07:39 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:34.075 [2024-11-29 12:07:39.415560] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:34.075 [2024-11-29 12:07:39.415892] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:34.075 [2024-11-29 12:07:39.416136] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.075 [2024-11-29 12:07:39.416353] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:34.075 [2024-11-29 12:07:39.416475] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:23:34.075 12:07:39 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.075 12:07:39 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:34.333 12:07:39 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:34.333 12:07:39 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:34.333 12:07:39 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@12 -- # local i 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:34.333 12:07:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:34.591 /dev/nbd0 00:23:34.591 12:07:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:34.591 12:07:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:34.591 12:07:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:34.591 12:07:39 -- common/autotest_common.sh@867 -- # local i 00:23:34.591 12:07:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:34.591 12:07:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:34.591 12:07:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:34.591 12:07:39 -- common/autotest_common.sh@871 -- # break 00:23:34.591 12:07:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:34.591 12:07:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:34.591 12:07:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.591 1+0 records in 00:23:34.591 1+0 records out 00:23:34.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785888 s, 5.2 MB/s 00:23:34.591 12:07:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.591 12:07:39 -- common/autotest_common.sh@884 -- # size=4096 00:23:34.591 12:07:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.591 12:07:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:34.591 12:07:39 -- common/autotest_common.sh@887 -- # return 0 00:23:34.591 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:34.591 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:34.591 12:07:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:34.849 /dev/nbd1 00:23:34.849 12:07:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:34.849 12:07:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:34.849 12:07:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:34.849 12:07:40 -- common/autotest_common.sh@867 -- # local i 00:23:34.849 12:07:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:34.849 12:07:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:34.849 12:07:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:34.849 12:07:40 -- common/autotest_common.sh@871 -- # break 00:23:34.849 12:07:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:34.849 12:07:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:34.849 12:07:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.849 1+0 records in 00:23:34.849 1+0 records out 00:23:34.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660822 s, 6.2 MB/s 00:23:34.849 12:07:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.849 12:07:40 -- common/autotest_common.sh@884 -- # size=4096 00:23:34.849 12:07:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.849 12:07:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:34.849 12:07:40 -- common/autotest_common.sh@887 -- # return 0 00:23:34.849 12:07:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:34.849 12:07:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:34.849 12:07:40 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
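After raid_bdev1 is deleted, the test verifies that the rebuild reproduced the data: BaseBdev1 and the rebuilt spare are both exported over NBD and compared byte-for-byte. The comparison starts at offset 1048576 because this run carries a superblock (data_offset of 2048 blocks at 512 bytes per block is exactly 1 MiB), whereas the earlier non-superblock run compared from offset 0. Condensed to just the RPCs shown above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC nbd_start_disk BaseBdev1 /dev/nbd0
  $RPC nbd_start_disk spare /dev/nbd1
  # Skip the first 1 MiB (superblock / data_offset region) on both devices, then compare the data area.
  cmp -i 1048576 /dev/nbd0 /dev/nbd1
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1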
00:23:35.107 12:07:40 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:35.107 12:07:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:35.107 12:07:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:35.107 12:07:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:35.107 12:07:40 -- bdev/nbd_common.sh@51 -- # local i 00:23:35.108 12:07:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.108 12:07:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@41 -- # break 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@45 -- # return 0 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.366 12:07:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@41 -- # break 00:23:35.624 12:07:40 -- bdev/nbd_common.sh@45 -- # return 0 00:23:35.624 12:07:40 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:35.624 12:07:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:35.624 12:07:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:35.624 12:07:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:35.883 12:07:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:36.141 [2024-11-29 12:07:41.444087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:36.141 [2024-11-29 12:07:41.444523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.141 [2024-11-29 12:07:41.444613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:36.141 [2024-11-29 12:07:41.444865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.141 [2024-11-29 12:07:41.447583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.141 [2024-11-29 12:07:41.447797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:36.141 [2024-11-29 12:07:41.448028] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:36.141 [2024-11-29 12:07:41.448209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:36.141 BaseBdev1 
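Because raid_bdev1 was created with -s, its configuration lives in a superblock on the base bdevs themselves. The test now deletes and re-creates each passthru bdev; on re-registration the raid module's examine path finds the on-disk superblock ('raid superblock found on bdev BaseBdev1' above) and re-claims the member, and once all members are back the array comes online again without any bdev_raid_create call. Per base bdev the step reduces to roughly:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_passthru_delete BaseBdev1
  # Re-creating the passthru triggers examine, which reads the raid superblock and claims the bdev.
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  # After BaseBdev2 and spare are re-registered the same way, raid_bdev1 shows up "online" again:
  $RPC bdev_raid_get_bdevs all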
00:23:36.141 12:07:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:36.141 12:07:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:36.141 12:07:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:36.399 12:07:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:36.657 [2024-11-29 12:07:41.928245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:36.657 [2024-11-29 12:07:41.928645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.657 [2024-11-29 12:07:41.928751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:36.657 [2024-11-29 12:07:41.928986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.657 [2024-11-29 12:07:41.929505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.657 [2024-11-29 12:07:41.929697] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:36.657 [2024-11-29 12:07:41.929970] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:23:36.657 [2024-11-29 12:07:41.930101] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:36.657 [2024-11-29 12:07:41.930209] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.657 [2024-11-29 12:07:41.930393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:23:36.657 [2024-11-29 12:07:41.930557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:36.657 BaseBdev2 00:23:36.657 12:07:41 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:36.915 12:07:42 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:37.172 [2024-11-29 12:07:42.520372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:37.172 [2024-11-29 12:07:42.520804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.172 [2024-11-29 12:07:42.520907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:37.173 [2024-11-29 12:07:42.521078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.173 [2024-11-29 12:07:42.521648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.173 [2024-11-29 12:07:42.521820] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:37.173 [2024-11-29 12:07:42.522060] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:37.173 [2024-11-29 12:07:42.522223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.173 spare 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.173 12:07:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.173 [2024-11-29 12:07:42.622554] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:23:37.173 [2024-11-29 12:07:42.622877] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:37.173 [2024-11-29 12:07:42.623143] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:23:37.173 [2024-11-29 12:07:42.623784] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:23:37.173 [2024-11-29 12:07:42.623913] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:23:37.173 [2024-11-29 12:07:42.624165] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.430 12:07:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.430 "name": "raid_bdev1", 00:23:37.430 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:37.430 "strip_size_kb": 0, 00:23:37.430 "state": "online", 00:23:37.430 "raid_level": "raid1", 00:23:37.430 "superblock": true, 00:23:37.430 "num_base_bdevs": 2, 00:23:37.430 "num_base_bdevs_discovered": 2, 00:23:37.430 "num_base_bdevs_operational": 2, 00:23:37.430 "base_bdevs_list": [ 00:23:37.430 { 00:23:37.430 "name": "spare", 00:23:37.430 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:37.430 "is_configured": true, 00:23:37.430 "data_offset": 2048, 00:23:37.430 "data_size": 63488 00:23:37.430 }, 00:23:37.430 { 00:23:37.430 "name": "BaseBdev2", 00:23:37.430 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:37.430 "is_configured": true, 00:23:37.430 "data_offset": 2048, 00:23:37.430 "data_size": 63488 00:23:37.430 } 00:23:37.430 ] 00:23:37.430 }' 00:23:37.430 12:07:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.430 12:07:42 -- common/autotest_common.sh@10 -- # set +x 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.995 12:07:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.253 12:07:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:38.253 "name": "raid_bdev1", 00:23:38.253 "uuid": "a8293ac2-ccde-4e7c-a2c3-7fec66cb25ca", 00:23:38.253 "strip_size_kb": 0, 00:23:38.253 "state": "online", 00:23:38.253 "raid_level": "raid1", 
00:23:38.253 "superblock": true, 00:23:38.253 "num_base_bdevs": 2, 00:23:38.253 "num_base_bdevs_discovered": 2, 00:23:38.253 "num_base_bdevs_operational": 2, 00:23:38.253 "base_bdevs_list": [ 00:23:38.253 { 00:23:38.253 "name": "spare", 00:23:38.253 "uuid": "14aa1c9c-d3d1-5907-8554-9ce7ed632df8", 00:23:38.253 "is_configured": true, 00:23:38.253 "data_offset": 2048, 00:23:38.253 "data_size": 63488 00:23:38.253 }, 00:23:38.253 { 00:23:38.253 "name": "BaseBdev2", 00:23:38.253 "uuid": "aeee5e43-9982-5174-a7ce-b4fc5ce2383a", 00:23:38.253 "is_configured": true, 00:23:38.253 "data_offset": 2048, 00:23:38.253 "data_size": 63488 00:23:38.253 } 00:23:38.253 ] 00:23:38.253 }' 00:23:38.253 12:07:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:38.512 12:07:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:38.512 12:07:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:38.512 12:07:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:38.512 12:07:43 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.512 12:07:43 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:38.770 12:07:44 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.770 12:07:44 -- bdev/bdev_raid.sh@709 -- # killprocess 134369 00:23:38.770 12:07:44 -- common/autotest_common.sh@936 -- # '[' -z 134369 ']' 00:23:38.770 12:07:44 -- common/autotest_common.sh@940 -- # kill -0 134369 00:23:38.770 12:07:44 -- common/autotest_common.sh@941 -- # uname 00:23:38.770 12:07:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:38.770 12:07:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134369 00:23:38.770 12:07:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:38.770 12:07:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:38.770 12:07:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134369' 00:23:38.770 killing process with pid 134369 00:23:38.770 12:07:44 -- common/autotest_common.sh@955 -- # kill 134369 00:23:38.770 Received shutdown signal, test time was about 60.000000 seconds 00:23:38.770 00:23:38.770 Latency(us) 00:23:38.770 [2024-11-29T12:07:44.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.770 [2024-11-29T12:07:44.281Z] =================================================================================================================== 00:23:38.770 [2024-11-29T12:07:44.281Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.770 12:07:44 -- common/autotest_common.sh@960 -- # wait 134369 00:23:38.770 [2024-11-29 12:07:44.134280] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:38.770 [2024-11-29 12:07:44.134550] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:38.770 [2024-11-29 12:07:44.134742] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:38.770 [2024-11-29 12:07:44.134879] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:23:38.770 [2024-11-29 12:07:44.171968] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:39.030 00:23:39.030 real 0m26.105s 00:23:39.030 user 0m38.850s 00:23:39.030 sys 0m4.140s 00:23:39.030 12:07:44 -- common/autotest_common.sh@1115 
-- # xtrace_disable 00:23:39.030 12:07:44 -- common/autotest_common.sh@10 -- # set +x 00:23:39.030 ************************************ 00:23:39.030 END TEST raid_rebuild_test_sb 00:23:39.030 ************************************ 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:23:39.030 12:07:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:39.030 12:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.030 12:07:44 -- common/autotest_common.sh@10 -- # set +x 00:23:39.030 ************************************ 00:23:39.030 START TEST raid_rebuild_test_io 00:23:39.030 ************************************ 00:23:39.030 12:07:44 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@544 -- # raid_pid=135006 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:39.030 12:07:44 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135006 /var/tmp/spdk-raid.sock 00:23:39.030 12:07:44 -- common/autotest_common.sh@829 -- # '[' -z 135006 ']' 00:23:39.030 12:07:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:39.030 12:07:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.030 12:07:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:39.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:39.030 12:07:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.030 12:07:44 -- common/autotest_common.sh@10 -- # set +x 00:23:39.290 [2024-11-29 12:07:44.563181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
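For orientation, the "raid_rebuild_test raid1 2 false true" invocation above maps directly onto the locals recorded at bdev_raid.sh@517-@520. A short sketch of that parameter handling, inferred from the trace rather than quoted from the script source:
raid_rebuild_test() {
    local raid_level=$1        # raid1
    local num_base_bdevs=$2    # 2
    local superblock=$3        # false here: bdev_raid_create is called without -s
    local background_io=$4     # true here: bdevperf keeps randrw I/O running during the rebuild
    # (test body follows as traced below)
}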
00:23:39.290 [2024-11-29 12:07:44.563696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135006 ] 00:23:39.290 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:39.290 Zero copy mechanism will not be used. 00:23:39.290 [2024-11-29 12:07:44.714056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.548 [2024-11-29 12:07:44.814621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.548 [2024-11-29 12:07:44.873177] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:40.111 12:07:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.111 12:07:45 -- common/autotest_common.sh@862 -- # return 0 00:23:40.111 12:07:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:40.111 12:07:45 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:40.111 12:07:45 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:40.368 BaseBdev1 00:23:40.368 12:07:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:40.368 12:07:45 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:40.368 12:07:45 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:40.629 BaseBdev2 00:23:40.629 12:07:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:40.886 spare_malloc 00:23:41.144 12:07:46 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:41.144 spare_delay 00:23:41.403 12:07:46 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:41.403 [2024-11-29 12:07:46.891002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:41.403 [2024-11-29 12:07:46.891425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.403 [2024-11-29 12:07:46.891527] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:41.403 [2024-11-29 12:07:46.891795] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.403 [2024-11-29 12:07:46.894723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.403 [2024-11-29 12:07:46.894921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:41.403 spare 00:23:41.403 12:07:46 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:41.661 [2024-11-29 12:07:47.155404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:41.661 [2024-11-29 12:07:47.159271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.661 [2024-11-29 12:07:47.159540] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:41.661 [2024-11-29 12:07:47.159692] bdev_raid.c:1585:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 65536, blocklen 512 00:23:41.661 [2024-11-29 12:07:47.159931] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:41.661 [2024-11-29 12:07:47.160519] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:41.661 [2024-11-29 12:07:47.160653] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:23:41.661 [2024-11-29 12:07:47.161045] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.920 12:07:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.178 12:07:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.178 "name": "raid_bdev1", 00:23:42.178 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:42.178 "strip_size_kb": 0, 00:23:42.178 "state": "online", 00:23:42.178 "raid_level": "raid1", 00:23:42.178 "superblock": false, 00:23:42.178 "num_base_bdevs": 2, 00:23:42.178 "num_base_bdevs_discovered": 2, 00:23:42.178 "num_base_bdevs_operational": 2, 00:23:42.178 "base_bdevs_list": [ 00:23:42.178 { 00:23:42.178 "name": "BaseBdev1", 00:23:42.178 "uuid": "94ea5895-a268-4da8-9ce8-ea63a9392629", 00:23:42.178 "is_configured": true, 00:23:42.178 "data_offset": 0, 00:23:42.178 "data_size": 65536 00:23:42.178 }, 00:23:42.178 { 00:23:42.178 "name": "BaseBdev2", 00:23:42.178 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:42.178 "is_configured": true, 00:23:42.178 "data_offset": 0, 00:23:42.178 "data_size": 65536 00:23:42.178 } 00:23:42.178 ] 00:23:42.178 }' 00:23:42.178 12:07:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.178 12:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:42.745 12:07:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:42.745 12:07:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:43.004 [2024-11-29 12:07:48.316181] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:43.004 12:07:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:43.004 12:07:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.004 12:07:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:43.262 12:07:48 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:43.262 12:07:48 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:43.262 12:07:48 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:43.262 12:07:48 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:43.262 [2024-11-29 12:07:48.666598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:23:43.262 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:43.262 Zero copy mechanism will not be used. 00:23:43.262 Running I/O for 60 seconds... 00:23:43.520 [2024-11-29 12:07:48.815481] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:43.520 [2024-11-29 12:07:48.816048] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.520 12:07:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.778 12:07:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:43.778 "name": "raid_bdev1", 00:23:43.778 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:43.778 "strip_size_kb": 0, 00:23:43.778 "state": "online", 00:23:43.778 "raid_level": "raid1", 00:23:43.778 "superblock": false, 00:23:43.778 "num_base_bdevs": 2, 00:23:43.778 "num_base_bdevs_discovered": 1, 00:23:43.778 "num_base_bdevs_operational": 1, 00:23:43.778 "base_bdevs_list": [ 00:23:43.778 { 00:23:43.778 "name": null, 00:23:43.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.778 "is_configured": false, 00:23:43.778 "data_offset": 0, 00:23:43.778 "data_size": 65536 00:23:43.778 }, 00:23:43.778 { 00:23:43.778 "name": "BaseBdev2", 00:23:43.778 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:43.778 "is_configured": true, 00:23:43.778 "data_offset": 0, 00:23:43.778 "data_size": 65536 00:23:43.778 } 00:23:43.778 ] 00:23:43.778 }' 00:23:43.778 12:07:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:43.778 12:07:49 -- common/autotest_common.sh@10 -- # set +x 00:23:44.346 12:07:49 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:44.913 [2024-11-29 12:07:50.121255] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:44.913 [2024-11-29 12:07:50.121636] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:44.913 12:07:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:44.913 [2024-11-29 12:07:50.165142] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:23:44.913 [2024-11-29 12:07:50.167656] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:44.913 [2024-11-29 12:07:50.286159] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:44.913 [2024-11-29 12:07:50.287136] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:45.172 [2024-11-29 12:07:50.497483] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:45.172 [2024-11-29 12:07:50.498153] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:45.430 [2024-11-29 12:07:50.854065] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:45.430 [2024-11-29 12:07:50.862415] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:45.689 [2024-11-29 12:07:51.076297] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.689 12:07:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.947 [2024-11-29 12:07:51.408662] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:45.947 12:07:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:45.947 "name": "raid_bdev1", 00:23:45.947 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:45.947 "strip_size_kb": 0, 00:23:45.947 "state": "online", 00:23:45.947 "raid_level": "raid1", 00:23:45.947 "superblock": false, 00:23:45.947 "num_base_bdevs": 2, 00:23:45.947 "num_base_bdevs_discovered": 2, 00:23:45.947 "num_base_bdevs_operational": 2, 00:23:45.947 "process": { 00:23:45.947 "type": "rebuild", 00:23:45.947 "target": "spare", 00:23:45.947 "progress": { 00:23:45.947 "blocks": 12288, 00:23:45.947 "percent": 18 00:23:45.947 } 00:23:45.947 }, 00:23:45.947 "base_bdevs_list": [ 00:23:45.947 { 00:23:45.947 "name": "spare", 00:23:45.947 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:45.947 "is_configured": true, 00:23:45.947 "data_offset": 0, 00:23:45.947 "data_size": 65536 00:23:45.947 }, 00:23:45.947 { 00:23:45.947 "name": "BaseBdev2", 00:23:45.947 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:45.947 "is_configured": true, 00:23:45.947 "data_offset": 0, 00:23:45.947 "data_size": 65536 00:23:45.947 } 00:23:45.947 ] 00:23:45.947 }' 00:23:45.947 12:07:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:46.206 12:07:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.207 12:07:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:46.207 12:07:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.207 12:07:51 -- bdev/bdev_raid.sh@604 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:46.207 [2024-11-29 12:07:51.628374] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:46.207 [2024-11-29 12:07:51.628998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:46.465 [2024-11-29 12:07:51.774914] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:46.465 [2024-11-29 12:07:51.884232] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:46.465 [2024-11-29 12:07:51.902319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.465 [2024-11-29 12:07:51.925686] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.466 12:07:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.034 12:07:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.034 "name": "raid_bdev1", 00:23:47.034 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:47.034 "strip_size_kb": 0, 00:23:47.034 "state": "online", 00:23:47.034 "raid_level": "raid1", 00:23:47.034 "superblock": false, 00:23:47.034 "num_base_bdevs": 2, 00:23:47.034 "num_base_bdevs_discovered": 1, 00:23:47.034 "num_base_bdevs_operational": 1, 00:23:47.034 "base_bdevs_list": [ 00:23:47.034 { 00:23:47.034 "name": null, 00:23:47.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.034 "is_configured": false, 00:23:47.034 "data_offset": 0, 00:23:47.034 "data_size": 65536 00:23:47.034 }, 00:23:47.034 { 00:23:47.034 "name": "BaseBdev2", 00:23:47.034 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:47.034 "is_configured": true, 00:23:47.034 "data_offset": 0, 00:23:47.034 "data_size": 65536 00:23:47.034 } 00:23:47.034 ] 00:23:47.034 }' 00:23:47.034 12:07:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.034 12:07:52 -- common/autotest_common.sh@10 -- # set +x 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.602 12:07:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.861 12:07:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.861 "name": "raid_bdev1", 00:23:47.861 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:47.861 "strip_size_kb": 0, 00:23:47.861 "state": "online", 00:23:47.861 "raid_level": "raid1", 00:23:47.861 "superblock": false, 00:23:47.861 "num_base_bdevs": 2, 00:23:47.861 "num_base_bdevs_discovered": 1, 00:23:47.861 "num_base_bdevs_operational": 1, 00:23:47.861 "base_bdevs_list": [ 00:23:47.861 { 00:23:47.861 "name": null, 00:23:47.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.861 "is_configured": false, 00:23:47.861 "data_offset": 0, 00:23:47.861 "data_size": 65536 00:23:47.861 }, 00:23:47.861 { 00:23:47.861 "name": "BaseBdev2", 00:23:47.861 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:47.861 "is_configured": true, 00:23:47.861 "data_offset": 0, 00:23:47.861 "data_size": 65536 00:23:47.861 } 00:23:47.861 ] 00:23:47.861 }' 00:23:47.861 12:07:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.861 12:07:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:47.861 12:07:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.861 12:07:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:47.861 12:07:53 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:48.119 [2024-11-29 12:07:53.585514] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:48.119 [2024-11-29 12:07:53.585901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.119 [2024-11-29 12:07:53.621097] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:23:48.119 [2024-11-29 12:07:53.623674] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:48.120 12:07:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:48.378 [2024-11-29 12:07:53.734145] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:48.378 [2024-11-29 12:07:53.735091] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:48.378 [2024-11-29 12:07:53.854203] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:48.378 [2024-11-29 12:07:53.854808] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:48.945 [2024-11-29 12:07:54.240377] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:49.203 [2024-11-29 12:07:54.570050] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:49.203 [2024-11-29 12:07:54.570965] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:49.203 12:07:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.203 12:07:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.203 12:07:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:49.203 12:07:54 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:23:49.203 12:07:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.203 12:07:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.203 12:07:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.461 [2024-11-29 12:07:54.782862] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:49.461 12:07:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.461 "name": "raid_bdev1", 00:23:49.461 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:49.461 "strip_size_kb": 0, 00:23:49.461 "state": "online", 00:23:49.461 "raid_level": "raid1", 00:23:49.461 "superblock": false, 00:23:49.461 "num_base_bdevs": 2, 00:23:49.461 "num_base_bdevs_discovered": 2, 00:23:49.461 "num_base_bdevs_operational": 2, 00:23:49.461 "process": { 00:23:49.461 "type": "rebuild", 00:23:49.461 "target": "spare", 00:23:49.461 "progress": { 00:23:49.461 "blocks": 16384, 00:23:49.461 "percent": 25 00:23:49.461 } 00:23:49.461 }, 00:23:49.461 "base_bdevs_list": [ 00:23:49.461 { 00:23:49.461 "name": "spare", 00:23:49.461 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:49.461 "is_configured": true, 00:23:49.461 "data_offset": 0, 00:23:49.461 "data_size": 65536 00:23:49.461 }, 00:23:49.461 { 00:23:49.461 "name": "BaseBdev2", 00:23:49.461 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:49.461 "is_configured": true, 00:23:49.461 "data_offset": 0, 00:23:49.461 "data_size": 65536 00:23:49.461 } 00:23:49.461 ] 00:23:49.461 }' 00:23:49.461 12:07:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.461 12:07:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.461 12:07:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@657 -- # local timeout=448 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.720 12:07:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.720 [2024-11-29 12:07:55.043460] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:49.977 [2024-11-29 12:07:55.286651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:49.977 [2024-11-29 12:07:55.287295] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 
offset_begin: 18432 offset_end: 24576 00:23:49.977 12:07:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.977 "name": "raid_bdev1", 00:23:49.977 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:49.977 "strip_size_kb": 0, 00:23:49.977 "state": "online", 00:23:49.977 "raid_level": "raid1", 00:23:49.977 "superblock": false, 00:23:49.977 "num_base_bdevs": 2, 00:23:49.977 "num_base_bdevs_discovered": 2, 00:23:49.977 "num_base_bdevs_operational": 2, 00:23:49.977 "process": { 00:23:49.977 "type": "rebuild", 00:23:49.977 "target": "spare", 00:23:49.977 "progress": { 00:23:49.977 "blocks": 22528, 00:23:49.977 "percent": 34 00:23:49.977 } 00:23:49.977 }, 00:23:49.977 "base_bdevs_list": [ 00:23:49.977 { 00:23:49.977 "name": "spare", 00:23:49.977 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:49.977 "is_configured": true, 00:23:49.977 "data_offset": 0, 00:23:49.977 "data_size": 65536 00:23:49.977 }, 00:23:49.977 { 00:23:49.977 "name": "BaseBdev2", 00:23:49.977 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:49.977 "is_configured": true, 00:23:49.977 "data_offset": 0, 00:23:49.977 "data_size": 65536 00:23:49.977 } 00:23:49.977 ] 00:23:49.977 }' 00:23:49.977 12:07:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.977 12:07:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.977 12:07:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.977 12:07:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.977 12:07:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:50.234 [2024-11-29 12:07:55.609875] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:50.491 [2024-11-29 12:07:55.946942] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:50.747 [2024-11-29 12:07:56.059315] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:51.005 [2024-11-29 12:07:56.273550] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.005 12:07:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.262 12:07:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.262 "name": "raid_bdev1", 00:23:51.262 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:51.262 "strip_size_kb": 0, 00:23:51.262 "state": "online", 00:23:51.262 "raid_level": "raid1", 00:23:51.262 "superblock": false, 00:23:51.262 "num_base_bdevs": 2, 00:23:51.262 "num_base_bdevs_discovered": 2, 00:23:51.262 "num_base_bdevs_operational": 2, 00:23:51.262 "process": { 00:23:51.262 "type": "rebuild", 00:23:51.262 "target": "spare", 00:23:51.262 "progress": { 00:23:51.262 "blocks": 43008, 00:23:51.262 "percent": 65 
00:23:51.262 } 00:23:51.262 }, 00:23:51.262 "base_bdevs_list": [ 00:23:51.262 { 00:23:51.262 "name": "spare", 00:23:51.262 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:51.262 "is_configured": true, 00:23:51.262 "data_offset": 0, 00:23:51.262 "data_size": 65536 00:23:51.262 }, 00:23:51.262 { 00:23:51.262 "name": "BaseBdev2", 00:23:51.262 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:51.262 "is_configured": true, 00:23:51.262 "data_offset": 0, 00:23:51.262 "data_size": 65536 00:23:51.262 } 00:23:51.262 ] 00:23:51.262 }' 00:23:51.262 12:07:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.262 12:07:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.262 12:07:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.520 12:07:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.520 12:07:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:51.777 [2024-11-29 12:07:57.077996] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:51.777 [2024-11-29 12:07:57.078708] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:52.344 [2024-11-29 12:07:57.771635] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.344 12:07:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.602 [2024-11-29 12:07:57.879252] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:52.602 [2024-11-29 12:07:57.882537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.602 12:07:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.602 "name": "raid_bdev1", 00:23:52.602 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:52.602 "strip_size_kb": 0, 00:23:52.602 "state": "online", 00:23:52.602 "raid_level": "raid1", 00:23:52.602 "superblock": false, 00:23:52.602 "num_base_bdevs": 2, 00:23:52.602 "num_base_bdevs_discovered": 2, 00:23:52.602 "num_base_bdevs_operational": 2, 00:23:52.602 "base_bdevs_list": [ 00:23:52.602 { 00:23:52.602 "name": "spare", 00:23:52.602 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:52.602 "is_configured": true, 00:23:52.602 "data_offset": 0, 00:23:52.602 "data_size": 65536 00:23:52.602 }, 00:23:52.602 { 00:23:52.602 "name": "BaseBdev2", 00:23:52.602 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:52.602 "is_configured": true, 00:23:52.602 "data_offset": 0, 00:23:52.602 "data_size": 65536 00:23:52.602 } 00:23:52.602 ] 00:23:52.602 }' 00:23:52.602 12:07:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.602 12:07:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:52.602 12:07:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // 
"none"' 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@660 -- # break 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.860 12:07:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.117 12:07:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.117 "name": "raid_bdev1", 00:23:53.117 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:53.117 "strip_size_kb": 0, 00:23:53.117 "state": "online", 00:23:53.117 "raid_level": "raid1", 00:23:53.117 "superblock": false, 00:23:53.117 "num_base_bdevs": 2, 00:23:53.117 "num_base_bdevs_discovered": 2, 00:23:53.117 "num_base_bdevs_operational": 2, 00:23:53.117 "base_bdevs_list": [ 00:23:53.117 { 00:23:53.117 "name": "spare", 00:23:53.117 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:53.117 "is_configured": true, 00:23:53.117 "data_offset": 0, 00:23:53.117 "data_size": 65536 00:23:53.117 }, 00:23:53.117 { 00:23:53.117 "name": "BaseBdev2", 00:23:53.117 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:53.117 "is_configured": true, 00:23:53.117 "data_offset": 0, 00:23:53.117 "data_size": 65536 00:23:53.117 } 00:23:53.117 ] 00:23:53.117 }' 00:23:53.117 12:07:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.117 12:07:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:53.117 12:07:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.118 12:07:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.376 12:07:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:53.376 "name": "raid_bdev1", 00:23:53.376 "uuid": "f85df196-b9db-4cc9-9769-65fe7ed13a45", 00:23:53.376 "strip_size_kb": 0, 00:23:53.376 "state": "online", 00:23:53.376 "raid_level": "raid1", 00:23:53.376 "superblock": false, 00:23:53.376 "num_base_bdevs": 2, 00:23:53.376 "num_base_bdevs_discovered": 2, 00:23:53.376 "num_base_bdevs_operational": 2, 00:23:53.376 "base_bdevs_list": [ 00:23:53.376 { 
00:23:53.376 "name": "spare", 00:23:53.376 "uuid": "e12ab9a3-0344-549b-9d56-257cdb70f514", 00:23:53.376 "is_configured": true, 00:23:53.376 "data_offset": 0, 00:23:53.376 "data_size": 65536 00:23:53.376 }, 00:23:53.376 { 00:23:53.376 "name": "BaseBdev2", 00:23:53.376 "uuid": "4ece734f-d4a9-4495-95d9-09b65cb76f5b", 00:23:53.376 "is_configured": true, 00:23:53.376 "data_offset": 0, 00:23:53.376 "data_size": 65536 00:23:53.376 } 00:23:53.376 ] 00:23:53.376 }' 00:23:53.376 12:07:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:53.376 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:53.970 12:07:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:54.228 [2024-11-29 12:07:59.735406] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:54.228 [2024-11-29 12:07:59.735697] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:54.487 00:23:54.487 Latency(us) 00:23:54.487 [2024-11-29T12:07:59.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.487 [2024-11-29T12:07:59.998Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:54.487 raid_bdev1 : 11.08 106.30 318.90 0.00 0.00 12665.77 338.85 111053.73 00:23:54.487 [2024-11-29T12:07:59.998Z] =================================================================================================================== 00:23:54.487 [2024-11-29T12:07:59.998Z] Total : 106.30 318.90 0.00 0.00 12665.77 338.85 111053.73 00:23:54.487 [2024-11-29 12:07:59.756535] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.487 [2024-11-29 12:07:59.756763] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.487 [2024-11-29 12:07:59.756917] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.487 0 00:23:54.487 [2024-11-29 12:07:59.757126] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:23:54.487 12:07:59 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.487 12:07:59 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:54.745 12:08:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:54.745 12:08:00 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:54.745 12:08:00 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@12 -- # local i 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.745 12:08:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:55.002 /dev/nbd0 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:55.002 12:08:00 -- common/autotest_common.sh@866 -- 
# local nbd_name=nbd0 00:23:55.002 12:08:00 -- common/autotest_common.sh@867 -- # local i 00:23:55.002 12:08:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:55.002 12:08:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:55.002 12:08:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:55.002 12:08:00 -- common/autotest_common.sh@871 -- # break 00:23:55.002 12:08:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:55.002 12:08:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:55.002 12:08:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:55.002 1+0 records in 00:23:55.002 1+0 records out 00:23:55.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617112 s, 6.6 MB/s 00:23:55.002 12:08:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:55.002 12:08:00 -- common/autotest_common.sh@884 -- # size=4096 00:23:55.002 12:08:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:55.002 12:08:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:55.002 12:08:00 -- common/autotest_common.sh@887 -- # return 0 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:55.002 12:08:00 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:55.002 12:08:00 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:23:55.002 12:08:00 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@12 -- # local i 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:55.002 12:08:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:55.259 /dev/nbd1 00:23:55.259 12:08:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:55.259 12:08:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:55.259 12:08:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:55.259 12:08:00 -- common/autotest_common.sh@867 -- # local i 00:23:55.259 12:08:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:55.259 12:08:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:55.259 12:08:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:55.259 12:08:00 -- common/autotest_common.sh@871 -- # break 00:23:55.259 12:08:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:55.259 12:08:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:55.259 12:08:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:55.259 1+0 records in 00:23:55.259 1+0 records out 00:23:55.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643826 s, 6.4 MB/s 00:23:55.259 12:08:00 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:55.260 12:08:00 -- common/autotest_common.sh@884 -- # size=4096 00:23:55.260 12:08:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:55.260 12:08:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:55.260 12:08:00 -- common/autotest_common.sh@887 -- # return 0 00:23:55.260 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:55.260 12:08:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:55.260 12:08:00 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:55.525 12:08:00 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:55.525 12:08:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:55.525 12:08:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:55.526 12:08:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:55.526 12:08:00 -- bdev/nbd_common.sh@51 -- # local i 00:23:55.526 12:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:55.526 12:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@41 -- # break 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:23:55.794 12:08:01 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@51 -- # local i 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:55.794 12:08:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@41 -- # break 00:23:56.051 12:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:23:56.051 12:08:01 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:56.051 12:08:01 -- bdev/bdev_raid.sh@709 -- # killprocess 135006 00:23:56.051 12:08:01 -- common/autotest_common.sh@936 -- # '[' -z 135006 ']' 00:23:56.051 12:08:01 -- common/autotest_common.sh@940 -- # kill -0 135006 00:23:56.051 12:08:01 -- common/autotest_common.sh@941 -- # uname 00:23:56.051 12:08:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:56.051 12:08:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135006 00:23:56.051 12:08:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
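The bdev_raid.sh@675-@684 steps above verify the rebuild result by exporting the rebuilt "spare" and the surviving BaseBdev2 as NBD devices and byte-comparing them from offset 0 (data_offset is 0 for this non-superblock run). A condensed sketch of that check, assuming the same socket and /dev/nbd nodes as in the trace:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1   # identical contents means the rebuild copied every block
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0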
00:23:56.051 12:08:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:56.051 12:08:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135006' 00:23:56.051 killing process with pid 135006 00:23:56.051 12:08:01 -- common/autotest_common.sh@955 -- # kill 135006 00:23:56.051 Received shutdown signal, test time was about 12.690034 seconds 00:23:56.051 00:23:56.051 Latency(us) 00:23:56.051 [2024-11-29T12:08:01.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.051 [2024-11-29T12:08:01.562Z] =================================================================================================================== 00:23:56.051 [2024-11-29T12:08:01.562Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.051 [2024-11-29 12:08:01.359451] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:56.051 12:08:01 -- common/autotest_common.sh@960 -- # wait 135006 00:23:56.051 [2024-11-29 12:08:01.391462] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:56.310 ************************************ 00:23:56.310 END TEST raid_rebuild_test_io 00:23:56.310 ************************************ 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:56.310 00:23:56.310 real 0m17.167s 00:23:56.310 user 0m27.546s 00:23:56.310 sys 0m2.063s 00:23:56.310 12:08:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:56.310 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:23:56.310 12:08:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:56.310 12:08:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:56.310 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:56.310 ************************************ 00:23:56.310 START TEST raid_rebuild_test_sb_io 00:23:56.310 ************************************ 00:23:56.310 12:08:01 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:56.310 12:08:01 
-- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=135470 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135470 /var/tmp/spdk-raid.sock 00:23:56.310 12:08:01 -- common/autotest_common.sh@829 -- # '[' -z 135470 ']' 00:23:56.310 12:08:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:56.310 12:08:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:56.310 12:08:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.310 12:08:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:56.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:56.310 12:08:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.310 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:56.310 [2024-11-29 12:08:01.757005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:56.310 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:56.310 Zero copy mechanism will not be used. 00:23:56.310 [2024-11-29 12:08:01.757225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135470 ] 00:23:56.567 [2024-11-29 12:08:01.900841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.567 [2024-11-29 12:08:01.996485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.567 [2024-11-29 12:08:02.051033] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.499 12:08:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.499 12:08:02 -- common/autotest_common.sh@862 -- # return 0 00:23:57.499 12:08:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:57.499 12:08:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:57.499 12:08:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:57.499 BaseBdev1_malloc 00:23:57.758 12:08:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:57.758 [2024-11-29 12:08:03.231660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:57.758 [2024-11-29 12:08:03.231787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.758 [2024-11-29 12:08:03.231844] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:57.758 [2024-11-29 12:08:03.231904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.758 [2024-11-29 12:08:03.234672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.758 [2024-11-29 12:08:03.234745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:57.758 BaseBdev1 00:23:57.759 12:08:03 -- 
bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:57.759 12:08:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:57.759 12:08:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:58.017 BaseBdev2_malloc 00:23:58.017 12:08:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:58.275 [2024-11-29 12:08:03.708336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:58.275 [2024-11-29 12:08:03.708461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.275 [2024-11-29 12:08:03.708511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:58.275 [2024-11-29 12:08:03.708562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.275 [2024-11-29 12:08:03.711173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.275 [2024-11-29 12:08:03.711233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:58.275 BaseBdev2 00:23:58.275 12:08:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:58.534 spare_malloc 00:23:58.534 12:08:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:58.792 spare_delay 00:23:58.792 12:08:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:59.052 [2024-11-29 12:08:04.415347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:59.052 [2024-11-29 12:08:04.415475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.052 [2024-11-29 12:08:04.415529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:59.052 [2024-11-29 12:08:04.415581] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.052 [2024-11-29 12:08:04.418304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.052 [2024-11-29 12:08:04.418389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:59.052 spare 00:23:59.052 12:08:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:59.311 [2024-11-29 12:08:04.643492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:59.311 [2024-11-29 12:08:04.645805] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:59.311 [2024-11-29 12:08:04.646068] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:23:59.311 [2024-11-29 12:08:04.646086] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:59.311 [2024-11-29 12:08:04.646270] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:23:59.311 [2024-11-29 12:08:04.646758] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 
00:23:59.311 [2024-11-29 12:08:04.646783] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:23:59.311 [2024-11-29 12:08:04.646976] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.311 12:08:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.569 12:08:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:59.569 "name": "raid_bdev1", 00:23:59.569 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:23:59.569 "strip_size_kb": 0, 00:23:59.569 "state": "online", 00:23:59.569 "raid_level": "raid1", 00:23:59.569 "superblock": true, 00:23:59.569 "num_base_bdevs": 2, 00:23:59.569 "num_base_bdevs_discovered": 2, 00:23:59.569 "num_base_bdevs_operational": 2, 00:23:59.569 "base_bdevs_list": [ 00:23:59.569 { 00:23:59.569 "name": "BaseBdev1", 00:23:59.569 "uuid": "70ad4370-3ea1-52a5-835d-610c0bdc8bd7", 00:23:59.569 "is_configured": true, 00:23:59.569 "data_offset": 2048, 00:23:59.569 "data_size": 63488 00:23:59.569 }, 00:23:59.569 { 00:23:59.569 "name": "BaseBdev2", 00:23:59.569 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:23:59.569 "is_configured": true, 00:23:59.569 "data_offset": 2048, 00:23:59.569 "data_size": 63488 00:23:59.569 } 00:23:59.569 ] 00:23:59.569 }' 00:23:59.569 12:08:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:59.569 12:08:04 -- common/autotest_common.sh@10 -- # set +x 00:24:00.504 12:08:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:00.504 12:08:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:00.504 [2024-11-29 12:08:05.931906] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.504 12:08:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:24:00.504 12:08:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.504 12:08:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:00.763 12:08:06 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:00.763 12:08:06 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:24:00.763 12:08:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:00.763 12:08:06 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:01.023 [2024-11-29 12:08:06.294404] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:24:01.023 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:01.023 Zero copy mechanism will not be used. 00:24:01.023 Running I/O for 60 seconds... 00:24:01.023 [2024-11-29 12:08:06.478054] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:01.023 [2024-11-29 12:08:06.485752] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.023 12:08:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.282 12:08:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.282 "name": "raid_bdev1", 00:24:01.282 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:01.282 "strip_size_kb": 0, 00:24:01.282 "state": "online", 00:24:01.282 "raid_level": "raid1", 00:24:01.282 "superblock": true, 00:24:01.282 "num_base_bdevs": 2, 00:24:01.282 "num_base_bdevs_discovered": 1, 00:24:01.282 "num_base_bdevs_operational": 1, 00:24:01.282 "base_bdevs_list": [ 00:24:01.282 { 00:24:01.282 "name": null, 00:24:01.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.282 "is_configured": false, 00:24:01.282 "data_offset": 2048, 00:24:01.282 "data_size": 63488 00:24:01.282 }, 00:24:01.282 { 00:24:01.282 "name": "BaseBdev2", 00:24:01.282 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:01.282 "is_configured": true, 00:24:01.282 "data_offset": 2048, 00:24:01.282 "data_size": 63488 00:24:01.282 } 00:24:01.282 ] 00:24:01.282 }' 00:24:01.282 12:08:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.282 12:08:06 -- common/autotest_common.sh@10 -- # set +x 00:24:02.218 12:08:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:02.218 [2024-11-29 12:08:07.694232] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:02.218 [2024-11-29 12:08:07.694319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:02.477 12:08:07 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:02.477 [2024-11-29 12:08:07.752818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:02.477 [2024-11-29 12:08:07.755315] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:02.477 [2024-11-29 12:08:07.874267] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:02.477 [2024-11-29 12:08:07.874904] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:02.736 [2024-11-29 12:08:08.094692] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:02.736 [2024-11-29 12:08:08.095041] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:03.302 [2024-11-29 12:08:08.550289] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.302 12:08:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.562 12:08:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:03.562 "name": "raid_bdev1", 00:24:03.562 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:03.562 "strip_size_kb": 0, 00:24:03.562 "state": "online", 00:24:03.562 "raid_level": "raid1", 00:24:03.562 "superblock": true, 00:24:03.562 "num_base_bdevs": 2, 00:24:03.562 "num_base_bdevs_discovered": 2, 00:24:03.562 "num_base_bdevs_operational": 2, 00:24:03.562 "process": { 00:24:03.562 "type": "rebuild", 00:24:03.562 "target": "spare", 00:24:03.562 "progress": { 00:24:03.562 "blocks": 14336, 00:24:03.563 "percent": 22 00:24:03.563 } 00:24:03.563 }, 00:24:03.563 "base_bdevs_list": [ 00:24:03.563 { 00:24:03.563 "name": "spare", 00:24:03.563 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:03.563 "is_configured": true, 00:24:03.563 "data_offset": 2048, 00:24:03.563 "data_size": 63488 00:24:03.563 }, 00:24:03.563 { 00:24:03.563 "name": "BaseBdev2", 00:24:03.563 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:03.563 "is_configured": true, 00:24:03.563 "data_offset": 2048, 00:24:03.563 "data_size": 63488 00:24:03.563 } 00:24:03.563 ] 00:24:03.563 }' 00:24:03.563 12:08:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:03.563 12:08:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.563 12:08:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:03.821 12:08:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.821 12:08:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:04.079 [2024-11-29 12:08:09.350709] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:04.079 [2024-11-29 12:08:09.351326] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:04.079 [2024-11-29 12:08:09.370802] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.079 [2024-11-29 12:08:09.461721] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:04.079 [2024-11-29 12:08:09.469846] bdev_raid.c: 723:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:04.079 [2024-11-29 12:08:09.571361] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:04.079 [2024-11-29 12:08:09.582054] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.338 [2024-11-29 12:08:09.598258] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.338 12:08:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.596 12:08:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.596 "name": "raid_bdev1", 00:24:04.596 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:04.596 "strip_size_kb": 0, 00:24:04.596 "state": "online", 00:24:04.596 "raid_level": "raid1", 00:24:04.596 "superblock": true, 00:24:04.596 "num_base_bdevs": 2, 00:24:04.596 "num_base_bdevs_discovered": 1, 00:24:04.596 "num_base_bdevs_operational": 1, 00:24:04.596 "base_bdevs_list": [ 00:24:04.596 { 00:24:04.596 "name": null, 00:24:04.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.597 "is_configured": false, 00:24:04.597 "data_offset": 2048, 00:24:04.597 "data_size": 63488 00:24:04.597 }, 00:24:04.597 { 00:24:04.597 "name": "BaseBdev2", 00:24:04.597 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:04.597 "is_configured": true, 00:24:04.597 "data_offset": 2048, 00:24:04.597 "data_size": 63488 00:24:04.597 } 00:24:04.597 ] 00:24:04.597 }' 00:24:04.597 12:08:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.597 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.164 12:08:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.423 12:08:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.423 "name": "raid_bdev1", 00:24:05.423 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:05.423 "strip_size_kb": 0, 00:24:05.423 "state": "online", 00:24:05.423 "raid_level": "raid1", 00:24:05.423 "superblock": true, 00:24:05.423 
"num_base_bdevs": 2, 00:24:05.423 "num_base_bdevs_discovered": 1, 00:24:05.423 "num_base_bdevs_operational": 1, 00:24:05.423 "base_bdevs_list": [ 00:24:05.423 { 00:24:05.423 "name": null, 00:24:05.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.423 "is_configured": false, 00:24:05.423 "data_offset": 2048, 00:24:05.423 "data_size": 63488 00:24:05.423 }, 00:24:05.423 { 00:24:05.423 "name": "BaseBdev2", 00:24:05.423 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:05.423 "is_configured": true, 00:24:05.423 "data_offset": 2048, 00:24:05.423 "data_size": 63488 00:24:05.423 } 00:24:05.423 ] 00:24:05.423 }' 00:24:05.423 12:08:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.681 12:08:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:05.681 12:08:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.681 12:08:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:05.681 12:08:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:05.939 [2024-11-29 12:08:11.248157] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:05.939 [2024-11-29 12:08:11.248233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.939 12:08:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:05.939 [2024-11-29 12:08:11.322818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:05.939 [2024-11-29 12:08:11.325188] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:05.939 [2024-11-29 12:08:11.437951] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:05.939 [2024-11-29 12:08:11.438605] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:06.197 [2024-11-29 12:08:11.665766] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:06.197 [2024-11-29 12:08:11.666151] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:06.455 [2024-11-29 12:08:11.903584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:06.455 [2024-11-29 12:08:11.904211] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:06.713 [2024-11-29 12:08:12.123001] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:06.713 [2024-11-29 12:08:12.123370] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:06.972 12:08:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.972 12:08:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.972 12:08:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:06.972 12:08:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:06.972 12:08:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.972 12:08:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.972 12:08:12 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.972 [2024-11-29 12:08:12.484142] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:07.231 12:08:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.231 "name": "raid_bdev1", 00:24:07.231 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:07.231 "strip_size_kb": 0, 00:24:07.231 "state": "online", 00:24:07.231 "raid_level": "raid1", 00:24:07.231 "superblock": true, 00:24:07.231 "num_base_bdevs": 2, 00:24:07.231 "num_base_bdevs_discovered": 2, 00:24:07.231 "num_base_bdevs_operational": 2, 00:24:07.231 "process": { 00:24:07.231 "type": "rebuild", 00:24:07.231 "target": "spare", 00:24:07.231 "progress": { 00:24:07.231 "blocks": 16384, 00:24:07.232 "percent": 25 00:24:07.232 } 00:24:07.232 }, 00:24:07.232 "base_bdevs_list": [ 00:24:07.232 { 00:24:07.232 "name": "spare", 00:24:07.232 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:07.232 "is_configured": true, 00:24:07.232 "data_offset": 2048, 00:24:07.232 "data_size": 63488 00:24:07.232 }, 00:24:07.232 { 00:24:07.232 "name": "BaseBdev2", 00:24:07.232 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:07.232 "is_configured": true, 00:24:07.232 "data_offset": 2048, 00:24:07.232 "data_size": 63488 00:24:07.232 } 00:24:07.232 ] 00:24:07.232 }' 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:07.232 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@657 -- # local timeout=466 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.232 12:08:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.490 [2024-11-29 12:08:12.933406] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:07.490 12:08:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.490 "name": "raid_bdev1", 00:24:07.490 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:07.490 "strip_size_kb": 0, 00:24:07.490 "state": "online", 00:24:07.490 "raid_level": "raid1", 00:24:07.490 "superblock": true, 00:24:07.490 "num_base_bdevs": 2, 00:24:07.490 "num_base_bdevs_discovered": 2, 00:24:07.490 
"num_base_bdevs_operational": 2, 00:24:07.490 "process": { 00:24:07.490 "type": "rebuild", 00:24:07.490 "target": "spare", 00:24:07.490 "progress": { 00:24:07.490 "blocks": 20480, 00:24:07.490 "percent": 32 00:24:07.490 } 00:24:07.490 }, 00:24:07.490 "base_bdevs_list": [ 00:24:07.490 { 00:24:07.490 "name": "spare", 00:24:07.490 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:07.490 "is_configured": true, 00:24:07.490 "data_offset": 2048, 00:24:07.490 "data_size": 63488 00:24:07.490 }, 00:24:07.490 { 00:24:07.490 "name": "BaseBdev2", 00:24:07.490 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:07.490 "is_configured": true, 00:24:07.490 "data_offset": 2048, 00:24:07.490 "data_size": 63488 00:24:07.490 } 00:24:07.490 ] 00:24:07.490 }' 00:24:07.490 12:08:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.749 12:08:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.749 12:08:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.749 12:08:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.749 12:08:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:08.315 [2024-11-29 12:08:13.778143] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.573 12:08:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.831 [2024-11-29 12:08:14.321795] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:09.090 12:08:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:09.090 "name": "raid_bdev1", 00:24:09.090 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:09.090 "strip_size_kb": 0, 00:24:09.090 "state": "online", 00:24:09.090 "raid_level": "raid1", 00:24:09.090 "superblock": true, 00:24:09.090 "num_base_bdevs": 2, 00:24:09.090 "num_base_bdevs_discovered": 2, 00:24:09.090 "num_base_bdevs_operational": 2, 00:24:09.090 "process": { 00:24:09.090 "type": "rebuild", 00:24:09.090 "target": "spare", 00:24:09.090 "progress": { 00:24:09.090 "blocks": 45056, 00:24:09.090 "percent": 70 00:24:09.090 } 00:24:09.090 }, 00:24:09.090 "base_bdevs_list": [ 00:24:09.090 { 00:24:09.090 "name": "spare", 00:24:09.090 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:09.090 "is_configured": true, 00:24:09.090 "data_offset": 2048, 00:24:09.090 "data_size": 63488 00:24:09.090 }, 00:24:09.090 { 00:24:09.090 "name": "BaseBdev2", 00:24:09.090 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:09.090 "is_configured": true, 00:24:09.090 "data_offset": 2048, 00:24:09.090 "data_size": 63488 00:24:09.090 } 00:24:09.090 ] 00:24:09.090 }' 00:24:09.090 12:08:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:09.090 12:08:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.090 12:08:14 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.090 12:08:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.090 12:08:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:09.090 [2024-11-29 12:08:14.557014] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.025 12:08:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.025 [2024-11-29 12:08:15.451478] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:10.284 [2024-11-29 12:08:15.551475] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:10.284 [2024-11-29 12:08:15.553973] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.284 12:08:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.284 "name": "raid_bdev1", 00:24:10.284 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:10.284 "strip_size_kb": 0, 00:24:10.284 "state": "online", 00:24:10.284 "raid_level": "raid1", 00:24:10.284 "superblock": true, 00:24:10.284 "num_base_bdevs": 2, 00:24:10.284 "num_base_bdevs_discovered": 2, 00:24:10.284 "num_base_bdevs_operational": 2, 00:24:10.284 "base_bdevs_list": [ 00:24:10.284 { 00:24:10.284 "name": "spare", 00:24:10.284 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:10.284 "is_configured": true, 00:24:10.284 "data_offset": 2048, 00:24:10.284 "data_size": 63488 00:24:10.284 }, 00:24:10.284 { 00:24:10.284 "name": "BaseBdev2", 00:24:10.284 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:10.284 "is_configured": true, 00:24:10.284 "data_offset": 2048, 00:24:10.284 "data_size": 63488 00:24:10.284 } 00:24:10.284 ] 00:24:10.284 }' 00:24:10.284 12:08:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.284 12:08:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:10.284 12:08:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@660 -- # break 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.543 12:08:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.802 "name": "raid_bdev1", 
00:24:10.802 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:10.802 "strip_size_kb": 0, 00:24:10.802 "state": "online", 00:24:10.802 "raid_level": "raid1", 00:24:10.802 "superblock": true, 00:24:10.802 "num_base_bdevs": 2, 00:24:10.802 "num_base_bdevs_discovered": 2, 00:24:10.802 "num_base_bdevs_operational": 2, 00:24:10.802 "base_bdevs_list": [ 00:24:10.802 { 00:24:10.802 "name": "spare", 00:24:10.802 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:10.802 "is_configured": true, 00:24:10.802 "data_offset": 2048, 00:24:10.802 "data_size": 63488 00:24:10.802 }, 00:24:10.802 { 00:24:10.802 "name": "BaseBdev2", 00:24:10.802 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:10.802 "is_configured": true, 00:24:10.802 "data_offset": 2048, 00:24:10.802 "data_size": 63488 00:24:10.802 } 00:24:10.802 ] 00:24:10.802 }' 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.802 12:08:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.061 12:08:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:11.061 "name": "raid_bdev1", 00:24:11.061 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:11.061 "strip_size_kb": 0, 00:24:11.061 "state": "online", 00:24:11.061 "raid_level": "raid1", 00:24:11.061 "superblock": true, 00:24:11.061 "num_base_bdevs": 2, 00:24:11.061 "num_base_bdevs_discovered": 2, 00:24:11.061 "num_base_bdevs_operational": 2, 00:24:11.061 "base_bdevs_list": [ 00:24:11.061 { 00:24:11.061 "name": "spare", 00:24:11.061 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:11.061 "is_configured": true, 00:24:11.061 "data_offset": 2048, 00:24:11.061 "data_size": 63488 00:24:11.061 }, 00:24:11.061 { 00:24:11.061 "name": "BaseBdev2", 00:24:11.061 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:11.061 "is_configured": true, 00:24:11.061 "data_offset": 2048, 00:24:11.061 "data_size": 63488 00:24:11.061 } 00:24:11.061 ] 00:24:11.061 }' 00:24:11.061 12:08:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:11.061 12:08:16 -- common/autotest_common.sh@10 -- # set +x 00:24:11.629 12:08:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:11.887 [2024-11-29 12:08:17.292776] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:11.887 
[2024-11-29 12:08:17.292838] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:11.887 00:24:11.887 Latency(us) 00:24:11.887 [2024-11-29T12:08:17.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.887 [2024-11-29T12:08:17.398Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:11.887 raid_bdev1 : 11.02 106.12 318.36 0.00 0.00 12547.80 344.44 110577.11 00:24:11.887 [2024-11-29T12:08:17.398Z] =================================================================================================================== 00:24:11.887 [2024-11-29T12:08:17.398Z] Total : 106.12 318.36 0.00 0.00 12547.80 344.44 110577.11 00:24:11.887 [2024-11-29 12:08:17.317580] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.887 [2024-11-29 12:08:17.317641] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.887 [2024-11-29 12:08:17.317762] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:11.887 [2024-11-29 12:08:17.317779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:24:11.887 0 00:24:11.887 12:08:17 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.887 12:08:17 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:12.145 12:08:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:12.145 12:08:17 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:24:12.145 12:08:17 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@12 -- # local i 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.145 12:08:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:12.403 /dev/nbd0 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:12.403 12:08:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:12.403 12:08:17 -- common/autotest_common.sh@867 -- # local i 00:24:12.403 12:08:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:12.403 12:08:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:12.403 12:08:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:12.403 12:08:17 -- common/autotest_common.sh@871 -- # break 00:24:12.403 12:08:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:12.403 12:08:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:12.403 12:08:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.403 1+0 records in 00:24:12.403 1+0 records out 00:24:12.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727051 s, 5.6 MB/s 00:24:12.403 12:08:17 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.403 12:08:17 -- common/autotest_common.sh@884 -- # size=4096 00:24:12.403 12:08:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.403 12:08:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:12.403 12:08:17 -- common/autotest_common.sh@887 -- # return 0 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.403 12:08:17 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:12.403 12:08:17 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:24:12.403 12:08:17 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@12 -- # local i 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.403 12:08:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:12.661 /dev/nbd1 00:24:12.661 12:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:12.661 12:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:12.662 12:08:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:12.662 12:08:18 -- common/autotest_common.sh@867 -- # local i 00:24:12.662 12:08:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:12.662 12:08:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:12.662 12:08:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:12.662 12:08:18 -- common/autotest_common.sh@871 -- # break 00:24:12.662 12:08:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:12.662 12:08:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:12.662 12:08:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.920 1+0 records in 00:24:12.920 1+0 records out 00:24:12.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003584 s, 11.4 MB/s 00:24:12.920 12:08:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.920 12:08:18 -- common/autotest_common.sh@884 -- # size=4096 00:24:12.920 12:08:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.920 12:08:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:12.920 12:08:18 -- common/autotest_common.sh@887 -- # return 0 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.920 12:08:18 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:12.920 12:08:18 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:12.920 12:08:18 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@51 -- # local i 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.920 12:08:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@41 -- # break 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@45 -- # return 0 00:24:13.177 12:08:18 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@51 -- # local i 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:13.177 12:08:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@41 -- # break 00:24:13.435 12:08:18 -- bdev/nbd_common.sh@45 -- # return 0 00:24:13.435 12:08:18 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:13.435 12:08:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:13.435 12:08:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:13.435 12:08:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:13.692 12:08:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:13.951 [2024-11-29 12:08:19.348192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:13.951 [2024-11-29 12:08:19.348322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.951 [2024-11-29 12:08:19.348368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:13.951 [2024-11-29 12:08:19.348402] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.951 [2024-11-29 12:08:19.351005] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.951 [2024-11-29 12:08:19.351087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:13.951 [2024-11-29 12:08:19.351197] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:13.951 [2024-11-29 12:08:19.351283] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.951 BaseBdev1 00:24:13.951 12:08:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:13.951 12:08:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:13.951 12:08:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:14.209 12:08:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:14.468 [2024-11-29 12:08:19.852371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:14.468 [2024-11-29 12:08:19.852512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.468 [2024-11-29 12:08:19.852557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:14.468 [2024-11-29 12:08:19.852590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.468 [2024-11-29 12:08:19.853065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.468 [2024-11-29 12:08:19.853132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:14.468 [2024-11-29 12:08:19.853236] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:14.468 [2024-11-29 12:08:19.853253] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:14.468 [2024-11-29 12:08:19.853261] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:14.468 [2024-11-29 12:08:19.853292] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring 00:24:14.468 [2024-11-29 12:08:19.853351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.468 BaseBdev2 00:24:14.468 12:08:19 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:14.726 12:08:20 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:14.984 [2024-11-29 12:08:20.344525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:14.984 [2024-11-29 12:08:20.344642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.984 [2024-11-29 12:08:20.344700] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:14.984 [2024-11-29 12:08:20.344726] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.984 [2024-11-29 12:08:20.345265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.984 [2024-11-29 12:08:20.345326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:14.984 [2024-11-29 12:08:20.345440] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:14.984 [2024-11-29 12:08:20.345485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.984 spare 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=raid_bdev1 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.984 12:08:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.984 [2024-11-29 12:08:20.445621] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:24:14.984 [2024-11-29 12:08:20.445679] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:14.984 [2024-11-29 12:08:20.445880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:24:14.984 [2024-11-29 12:08:20.446436] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:24:14.984 [2024-11-29 12:08:20.446464] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:24:14.984 [2024-11-29 12:08:20.446625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.242 12:08:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:15.242 "name": "raid_bdev1", 00:24:15.242 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:15.242 "strip_size_kb": 0, 00:24:15.242 "state": "online", 00:24:15.242 "raid_level": "raid1", 00:24:15.242 "superblock": true, 00:24:15.242 "num_base_bdevs": 2, 00:24:15.242 "num_base_bdevs_discovered": 2, 00:24:15.242 "num_base_bdevs_operational": 2, 00:24:15.242 "base_bdevs_list": [ 00:24:15.242 { 00:24:15.242 "name": "spare", 00:24:15.242 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:15.242 "is_configured": true, 00:24:15.242 "data_offset": 2048, 00:24:15.242 "data_size": 63488 00:24:15.242 }, 00:24:15.242 { 00:24:15.242 "name": "BaseBdev2", 00:24:15.242 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:15.242 "is_configured": true, 00:24:15.242 "data_offset": 2048, 00:24:15.242 "data_size": 63488 00:24:15.242 } 00:24:15.242 ] 00:24:15.242 }' 00:24:15.242 12:08:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:15.242 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.810 12:08:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.068 12:08:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.068 "name": "raid_bdev1", 00:24:16.068 "uuid": "798ee2e5-de3a-4a53-a541-8a2f28fc469a", 00:24:16.068 
"strip_size_kb": 0, 00:24:16.068 "state": "online", 00:24:16.068 "raid_level": "raid1", 00:24:16.068 "superblock": true, 00:24:16.068 "num_base_bdevs": 2, 00:24:16.068 "num_base_bdevs_discovered": 2, 00:24:16.068 "num_base_bdevs_operational": 2, 00:24:16.068 "base_bdevs_list": [ 00:24:16.068 { 00:24:16.068 "name": "spare", 00:24:16.068 "uuid": "c35b3984-e61c-590a-ae00-006a213993cd", 00:24:16.068 "is_configured": true, 00:24:16.068 "data_offset": 2048, 00:24:16.068 "data_size": 63488 00:24:16.068 }, 00:24:16.068 { 00:24:16.068 "name": "BaseBdev2", 00:24:16.068 "uuid": "9bd0c8c5-7128-5c67-906b-2334c8ca551f", 00:24:16.068 "is_configured": true, 00:24:16.068 "data_offset": 2048, 00:24:16.068 "data_size": 63488 00:24:16.068 } 00:24:16.068 ] 00:24:16.068 }' 00:24:16.068 12:08:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:16.068 12:08:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:16.068 12:08:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.326 12:08:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:16.326 12:08:21 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.326 12:08:21 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:16.326 12:08:21 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.326 12:08:21 -- bdev/bdev_raid.sh@709 -- # killprocess 135470 00:24:16.326 12:08:21 -- common/autotest_common.sh@936 -- # '[' -z 135470 ']' 00:24:16.326 12:08:21 -- common/autotest_common.sh@940 -- # kill -0 135470 00:24:16.326 12:08:21 -- common/autotest_common.sh@941 -- # uname 00:24:16.326 12:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.584 12:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135470 00:24:16.584 12:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:16.584 12:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:16.584 12:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135470' 00:24:16.584 killing process with pid 135470 00:24:16.584 12:08:21 -- common/autotest_common.sh@955 -- # kill 135470 00:24:16.584 Received shutdown signal, test time was about 15.560290 seconds 00:24:16.584 00:24:16.584 Latency(us) 00:24:16.584 [2024-11-29T12:08:22.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.584 [2024-11-29T12:08:22.095Z] =================================================================================================================== 00:24:16.584 [2024-11-29T12:08:22.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.584 [2024-11-29 12:08:21.857169] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.584 [2024-11-29 12:08:21.857292] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.584 [2024-11-29 12:08:21.857378] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.584 [2024-11-29 12:08:21.857394] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:24:16.584 12:08:21 -- common/autotest_common.sh@960 -- # wait 135470 00:24:16.584 [2024-11-29 12:08:21.889712] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:16.842 12:08:22 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:16.842 00:24:16.842 real 0m20.447s 00:24:16.842 user 0m33.850s 00:24:16.842 
sys 0m2.473s 00:24:16.842 12:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.842 12:08:22 -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 ************************************ 00:24:16.842 END TEST raid_rebuild_test_sb_io 00:24:16.842 ************************************ 00:24:16.842 12:08:22 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:24:16.842 12:08:22 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:24:16.842 12:08:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:16.842 12:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.842 12:08:22 -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 ************************************ 00:24:16.843 START TEST raid_rebuild_test 00:24:16.843 ************************************ 00:24:16.843 12:08:22 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@544 -- # raid_pid=136017 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:16.843 12:08:22 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136017 /var/tmp/spdk-raid.sock 00:24:16.843 12:08:22 -- common/autotest_common.sh@829 -- # '[' -z 136017 ']' 00:24:16.843 12:08:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:16.843 12:08:22 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.843 12:08:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:16.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:16.843 12:08:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.843 12:08:22 -- common/autotest_common.sh@10 -- # set +x 00:24:16.843 [2024-11-29 12:08:22.275735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:16.843 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:16.843 Zero copy mechanism will not be used. 00:24:16.843 [2024-11-29 12:08:22.275941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136017 ] 00:24:17.101 [2024-11-29 12:08:22.418535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.101 [2024-11-29 12:08:22.513969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.101 [2024-11-29 12:08:22.568977] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.034 12:08:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.034 12:08:23 -- common/autotest_common.sh@862 -- # return 0 00:24:18.034 12:08:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:18.034 12:08:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:18.034 12:08:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:18.034 BaseBdev1 00:24:18.293 12:08:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:18.293 12:08:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:18.293 12:08:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:18.293 BaseBdev2 00:24:18.293 12:08:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:18.293 12:08:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:18.293 12:08:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:18.858 BaseBdev3 00:24:18.858 12:08:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:18.858 12:08:24 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:18.858 12:08:24 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:18.858 BaseBdev4 00:24:18.858 12:08:24 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:19.115 spare_malloc 00:24:19.115 12:08:24 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:19.373 spare_delay 00:24:19.373 12:08:24 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:19.634 [2024-11-29 12:08:25.036433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:19.634 
[2024-11-29 12:08:25.036574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.634 [2024-11-29 12:08:25.036625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:19.634 [2024-11-29 12:08:25.036674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.634 [2024-11-29 12:08:25.039562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.634 [2024-11-29 12:08:25.039632] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:19.634 spare 00:24:19.634 12:08:25 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:19.892 [2024-11-29 12:08:25.332600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.892 [2024-11-29 12:08:25.334966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:19.892 [2024-11-29 12:08:25.335048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:19.892 [2024-11-29 12:08:25.335093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:19.892 [2024-11-29 12:08:25.335205] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:24:19.892 [2024-11-29 12:08:25.335220] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:19.892 [2024-11-29 12:08:25.335397] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:24:19.892 [2024-11-29 12:08:25.335880] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:24:19.892 [2024-11-29 12:08:25.335904] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:24:19.892 [2024-11-29 12:08:25.336128] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.892 12:08:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.150 12:08:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.150 "name": "raid_bdev1", 00:24:20.150 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:20.150 "strip_size_kb": 0, 00:24:20.150 "state": "online", 00:24:20.150 "raid_level": "raid1", 00:24:20.150 "superblock": false, 00:24:20.150 "num_base_bdevs": 4, 00:24:20.150 "num_base_bdevs_discovered": 4, 00:24:20.150 
"num_base_bdevs_operational": 4, 00:24:20.150 "base_bdevs_list": [ 00:24:20.150 { 00:24:20.150 "name": "BaseBdev1", 00:24:20.150 "uuid": "eefc8785-91e9-4b27-a073-788622c0ff0a", 00:24:20.150 "is_configured": true, 00:24:20.150 "data_offset": 0, 00:24:20.150 "data_size": 65536 00:24:20.150 }, 00:24:20.150 { 00:24:20.150 "name": "BaseBdev2", 00:24:20.150 "uuid": "869fbc0d-d3a6-43ac-8cc7-3ca6bde7dffe", 00:24:20.150 "is_configured": true, 00:24:20.150 "data_offset": 0, 00:24:20.150 "data_size": 65536 00:24:20.150 }, 00:24:20.150 { 00:24:20.150 "name": "BaseBdev3", 00:24:20.150 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:20.150 "is_configured": true, 00:24:20.150 "data_offset": 0, 00:24:20.150 "data_size": 65536 00:24:20.150 }, 00:24:20.150 { 00:24:20.150 "name": "BaseBdev4", 00:24:20.150 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:20.150 "is_configured": true, 00:24:20.150 "data_offset": 0, 00:24:20.150 "data_size": 65536 00:24:20.150 } 00:24:20.150 ] 00:24:20.150 }' 00:24:20.150 12:08:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.150 12:08:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.087 12:08:26 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:21.087 12:08:26 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:21.087 [2024-11-29 12:08:26.464982] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:21.087 12:08:26 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:24:21.087 12:08:26 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:21.087 12:08:26 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.345 12:08:26 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:21.345 12:08:26 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:21.345 12:08:26 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:21.345 12:08:26 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@12 -- # local i 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:21.345 12:08:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:21.604 [2024-11-29 12:08:26.972944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:21.604 /dev/nbd0 00:24:21.604 12:08:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:21.604 12:08:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:21.604 12:08:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:21.604 12:08:27 -- common/autotest_common.sh@867 -- # local i 00:24:21.604 12:08:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:21.604 12:08:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:21.604 12:08:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:21.604 12:08:27 -- 
common/autotest_common.sh@871 -- # break 00:24:21.604 12:08:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:21.604 12:08:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:21.604 12:08:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:21.604 1+0 records in 00:24:21.604 1+0 records out 00:24:21.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346102 s, 11.8 MB/s 00:24:21.604 12:08:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.604 12:08:27 -- common/autotest_common.sh@884 -- # size=4096 00:24:21.604 12:08:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.604 12:08:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:21.604 12:08:27 -- common/autotest_common.sh@887 -- # return 0 00:24:21.604 12:08:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:21.604 12:08:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:21.604 12:08:27 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:21.604 12:08:27 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:21.604 12:08:27 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:28.231 65536+0 records in 00:24:28.231 65536+0 records out 00:24:28.231 33554432 bytes (34 MB, 32 MiB) copied, 6.33563 s, 5.3 MB/s 00:24:28.231 12:08:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@51 -- # local i 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:28.231 [2024-11-29 12:08:33.638507] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@41 -- # break 00:24:28.231 12:08:33 -- bdev/nbd_common.sh@45 -- # return 0 00:24:28.231 12:08:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:28.491 [2024-11-29 12:08:33.858544] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.491 12:08:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.750 12:08:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.750 "name": "raid_bdev1", 00:24:28.750 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:28.750 "strip_size_kb": 0, 00:24:28.750 "state": "online", 00:24:28.750 "raid_level": "raid1", 00:24:28.750 "superblock": false, 00:24:28.750 "num_base_bdevs": 4, 00:24:28.750 "num_base_bdevs_discovered": 3, 00:24:28.750 "num_base_bdevs_operational": 3, 00:24:28.750 "base_bdevs_list": [ 00:24:28.750 { 00:24:28.750 "name": null, 00:24:28.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.750 "is_configured": false, 00:24:28.750 "data_offset": 0, 00:24:28.750 "data_size": 65536 00:24:28.750 }, 00:24:28.750 { 00:24:28.750 "name": "BaseBdev2", 00:24:28.750 "uuid": "869fbc0d-d3a6-43ac-8cc7-3ca6bde7dffe", 00:24:28.750 "is_configured": true, 00:24:28.750 "data_offset": 0, 00:24:28.750 "data_size": 65536 00:24:28.750 }, 00:24:28.750 { 00:24:28.750 "name": "BaseBdev3", 00:24:28.750 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:28.750 "is_configured": true, 00:24:28.750 "data_offset": 0, 00:24:28.750 "data_size": 65536 00:24:28.750 }, 00:24:28.750 { 00:24:28.750 "name": "BaseBdev4", 00:24:28.750 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:28.750 "is_configured": true, 00:24:28.750 "data_offset": 0, 00:24:28.750 "data_size": 65536 00:24:28.750 } 00:24:28.750 ] 00:24:28.750 }' 00:24:28.750 12:08:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.750 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:24:29.316 12:08:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:29.574 [2024-11-29 12:08:34.998928] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:29.574 [2024-11-29 12:08:34.999016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:29.574 [2024-11-29 12:08:35.003689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:24:29.574 [2024-11-29 12:08:35.006036] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:29.574 12:08:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:30.509 12:08:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:30.509 12:08:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:30.509 12:08:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:30.767 12:08:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:30.767 12:08:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:30.767 12:08:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.767 12:08:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.025 12:08:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:31.025 "name": "raid_bdev1", 00:24:31.025 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 
00:24:31.025 "strip_size_kb": 0, 00:24:31.025 "state": "online", 00:24:31.025 "raid_level": "raid1", 00:24:31.025 "superblock": false, 00:24:31.025 "num_base_bdevs": 4, 00:24:31.025 "num_base_bdevs_discovered": 4, 00:24:31.025 "num_base_bdevs_operational": 4, 00:24:31.025 "process": { 00:24:31.025 "type": "rebuild", 00:24:31.025 "target": "spare", 00:24:31.025 "progress": { 00:24:31.025 "blocks": 24576, 00:24:31.025 "percent": 37 00:24:31.025 } 00:24:31.025 }, 00:24:31.025 "base_bdevs_list": [ 00:24:31.025 { 00:24:31.025 "name": "spare", 00:24:31.025 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:31.025 "is_configured": true, 00:24:31.025 "data_offset": 0, 00:24:31.025 "data_size": 65536 00:24:31.025 }, 00:24:31.025 { 00:24:31.025 "name": "BaseBdev2", 00:24:31.025 "uuid": "869fbc0d-d3a6-43ac-8cc7-3ca6bde7dffe", 00:24:31.025 "is_configured": true, 00:24:31.025 "data_offset": 0, 00:24:31.025 "data_size": 65536 00:24:31.025 }, 00:24:31.025 { 00:24:31.025 "name": "BaseBdev3", 00:24:31.025 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:31.025 "is_configured": true, 00:24:31.025 "data_offset": 0, 00:24:31.025 "data_size": 65536 00:24:31.025 }, 00:24:31.025 { 00:24:31.025 "name": "BaseBdev4", 00:24:31.025 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:31.025 "is_configured": true, 00:24:31.025 "data_offset": 0, 00:24:31.025 "data_size": 65536 00:24:31.025 } 00:24:31.025 ] 00:24:31.025 }' 00:24:31.025 12:08:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:31.025 12:08:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.025 12:08:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:31.025 12:08:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.025 12:08:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:31.283 [2024-11-29 12:08:36.652384] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:31.283 [2024-11-29 12:08:36.718314] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:31.283 [2024-11-29 12:08:36.718513] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.283 12:08:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.542 12:08:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.542 "name": "raid_bdev1", 00:24:31.542 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:31.542 "strip_size_kb": 0, 00:24:31.542 "state": "online", 
00:24:31.542 "raid_level": "raid1", 00:24:31.542 "superblock": false, 00:24:31.542 "num_base_bdevs": 4, 00:24:31.542 "num_base_bdevs_discovered": 3, 00:24:31.542 "num_base_bdevs_operational": 3, 00:24:31.542 "base_bdevs_list": [ 00:24:31.542 { 00:24:31.542 "name": null, 00:24:31.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.542 "is_configured": false, 00:24:31.542 "data_offset": 0, 00:24:31.542 "data_size": 65536 00:24:31.542 }, 00:24:31.542 { 00:24:31.542 "name": "BaseBdev2", 00:24:31.542 "uuid": "869fbc0d-d3a6-43ac-8cc7-3ca6bde7dffe", 00:24:31.542 "is_configured": true, 00:24:31.542 "data_offset": 0, 00:24:31.542 "data_size": 65536 00:24:31.542 }, 00:24:31.542 { 00:24:31.542 "name": "BaseBdev3", 00:24:31.542 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:31.542 "is_configured": true, 00:24:31.542 "data_offset": 0, 00:24:31.542 "data_size": 65536 00:24:31.542 }, 00:24:31.542 { 00:24:31.542 "name": "BaseBdev4", 00:24:31.542 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:31.543 "is_configured": true, 00:24:31.543 "data_offset": 0, 00:24:31.543 "data_size": 65536 00:24:31.543 } 00:24:31.543 ] 00:24:31.543 }' 00:24:31.543 12:08:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.543 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:32.478 "name": "raid_bdev1", 00:24:32.478 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:32.478 "strip_size_kb": 0, 00:24:32.478 "state": "online", 00:24:32.478 "raid_level": "raid1", 00:24:32.478 "superblock": false, 00:24:32.478 "num_base_bdevs": 4, 00:24:32.478 "num_base_bdevs_discovered": 3, 00:24:32.478 "num_base_bdevs_operational": 3, 00:24:32.478 "base_bdevs_list": [ 00:24:32.478 { 00:24:32.478 "name": null, 00:24:32.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.478 "is_configured": false, 00:24:32.478 "data_offset": 0, 00:24:32.478 "data_size": 65536 00:24:32.478 }, 00:24:32.478 { 00:24:32.478 "name": "BaseBdev2", 00:24:32.478 "uuid": "869fbc0d-d3a6-43ac-8cc7-3ca6bde7dffe", 00:24:32.478 "is_configured": true, 00:24:32.478 "data_offset": 0, 00:24:32.478 "data_size": 65536 00:24:32.478 }, 00:24:32.478 { 00:24:32.478 "name": "BaseBdev3", 00:24:32.478 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:32.478 "is_configured": true, 00:24:32.478 "data_offset": 0, 00:24:32.478 "data_size": 65536 00:24:32.478 }, 00:24:32.478 { 00:24:32.478 "name": "BaseBdev4", 00:24:32.478 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:32.478 "is_configured": true, 00:24:32.478 "data_offset": 0, 00:24:32.478 "data_size": 65536 00:24:32.478 } 00:24:32.478 ] 00:24:32.478 }' 00:24:32.478 12:08:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:32.743 12:08:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:32.743 12:08:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // 
"none"' 00:24:32.743 12:08:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:32.743 12:08:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:33.001 [2024-11-29 12:08:38.323476] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:33.001 [2024-11-29 12:08:38.323534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:33.001 [2024-11-29 12:08:38.328079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:24:33.001 [2024-11-29 12:08:38.330386] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:33.001 12:08:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.936 12:08:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.194 12:08:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:34.194 "name": "raid_bdev1", 00:24:34.194 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:34.194 "strip_size_kb": 0, 00:24:34.194 "state": "online", 00:24:34.194 "raid_level": "raid1", 00:24:34.194 "superblock": false, 00:24:34.194 "num_base_bdevs": 4, 00:24:34.194 "num_base_bdevs_discovered": 4, 00:24:34.194 "num_base_bdevs_operational": 4, 00:24:34.194 "process": { 00:24:34.194 "type": "rebuild", 00:24:34.194 "target": "spare", 00:24:34.194 "progress": { 00:24:34.194 "blocks": 24576, 00:24:34.194 "percent": 37 00:24:34.194 } 00:24:34.194 }, 00:24:34.194 "base_bdevs_list": [ 00:24:34.194 { 00:24:34.194 "name": "spare", 00:24:34.194 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:34.194 "is_configured": true, 00:24:34.194 "data_offset": 0, 00:24:34.194 "data_size": 65536 00:24:34.194 }, 00:24:34.194 { 00:24:34.194 "name": "BaseBdev2", 00:24:34.194 "uuid": "869fbc0d-d3a6-43ac-8cc7-3ca6bde7dffe", 00:24:34.194 "is_configured": true, 00:24:34.194 "data_offset": 0, 00:24:34.194 "data_size": 65536 00:24:34.194 }, 00:24:34.194 { 00:24:34.194 "name": "BaseBdev3", 00:24:34.194 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:34.194 "is_configured": true, 00:24:34.194 "data_offset": 0, 00:24:34.194 "data_size": 65536 00:24:34.194 }, 00:24:34.194 { 00:24:34.194 "name": "BaseBdev4", 00:24:34.194 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:34.194 "is_configured": true, 00:24:34.194 "data_offset": 0, 00:24:34.194 "data_size": 65536 00:24:34.194 } 00:24:34.194 ] 00:24:34.194 }' 00:24:34.194 12:08:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:34.194 12:08:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.194 12:08:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:34.452 12:08:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:34.452 12:08:39 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:34.452 12:08:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 
00:24:34.452 12:08:39 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:34.452 12:08:39 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:34.452 12:08:39 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:34.453 [2024-11-29 12:08:39.961397] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:34.711 [2024-11-29 12:08:40.041389] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06220 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.712 12:08:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.970 12:08:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:34.970 "name": "raid_bdev1", 00:24:34.970 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:34.971 "strip_size_kb": 0, 00:24:34.971 "state": "online", 00:24:34.971 "raid_level": "raid1", 00:24:34.971 "superblock": false, 00:24:34.971 "num_base_bdevs": 4, 00:24:34.971 "num_base_bdevs_discovered": 3, 00:24:34.971 "num_base_bdevs_operational": 3, 00:24:34.971 "process": { 00:24:34.971 "type": "rebuild", 00:24:34.971 "target": "spare", 00:24:34.971 "progress": { 00:24:34.971 "blocks": 38912, 00:24:34.971 "percent": 59 00:24:34.971 } 00:24:34.971 }, 00:24:34.971 "base_bdevs_list": [ 00:24:34.971 { 00:24:34.971 "name": "spare", 00:24:34.971 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:34.971 "is_configured": true, 00:24:34.971 "data_offset": 0, 00:24:34.971 "data_size": 65536 00:24:34.971 }, 00:24:34.971 { 00:24:34.971 "name": null, 00:24:34.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.971 "is_configured": false, 00:24:34.971 "data_offset": 0, 00:24:34.971 "data_size": 65536 00:24:34.971 }, 00:24:34.971 { 00:24:34.971 "name": "BaseBdev3", 00:24:34.971 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:34.971 "is_configured": true, 00:24:34.971 "data_offset": 0, 00:24:34.971 "data_size": 65536 00:24:34.971 }, 00:24:34.971 { 00:24:34.971 "name": "BaseBdev4", 00:24:34.971 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:34.971 "is_configured": true, 00:24:34.971 "data_offset": 0, 00:24:34.971 "data_size": 65536 00:24:34.971 } 00:24:34.971 ] 00:24:34.971 }' 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@657 -- # local timeout=494 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.971 12:08:40 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.971 12:08:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.230 12:08:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.230 "name": "raid_bdev1", 00:24:35.230 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:35.230 "strip_size_kb": 0, 00:24:35.230 "state": "online", 00:24:35.230 "raid_level": "raid1", 00:24:35.230 "superblock": false, 00:24:35.230 "num_base_bdevs": 4, 00:24:35.230 "num_base_bdevs_discovered": 3, 00:24:35.230 "num_base_bdevs_operational": 3, 00:24:35.230 "process": { 00:24:35.230 "type": "rebuild", 00:24:35.230 "target": "spare", 00:24:35.230 "progress": { 00:24:35.230 "blocks": 45056, 00:24:35.230 "percent": 68 00:24:35.230 } 00:24:35.230 }, 00:24:35.230 "base_bdevs_list": [ 00:24:35.230 { 00:24:35.230 "name": "spare", 00:24:35.230 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:35.230 "is_configured": true, 00:24:35.230 "data_offset": 0, 00:24:35.230 "data_size": 65536 00:24:35.230 }, 00:24:35.230 { 00:24:35.230 "name": null, 00:24:35.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.230 "is_configured": false, 00:24:35.230 "data_offset": 0, 00:24:35.230 "data_size": 65536 00:24:35.230 }, 00:24:35.230 { 00:24:35.230 "name": "BaseBdev3", 00:24:35.230 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:35.230 "is_configured": true, 00:24:35.230 "data_offset": 0, 00:24:35.230 "data_size": 65536 00:24:35.230 }, 00:24:35.230 { 00:24:35.230 "name": "BaseBdev4", 00:24:35.230 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:35.230 "is_configured": true, 00:24:35.230 "data_offset": 0, 00:24:35.230 "data_size": 65536 00:24:35.230 } 00:24:35.230 ] 00:24:35.230 }' 00:24:35.230 12:08:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.230 12:08:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.230 12:08:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.230 12:08:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.230 12:08:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:36.166 [2024-11-29 12:08:41.551758] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:36.166 [2024-11-29 12:08:41.551878] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:36.166 [2024-11-29 12:08:41.551993] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.424 12:08:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.424 12:08:41 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.681 12:08:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:36.681 "name": "raid_bdev1", 00:24:36.681 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:36.681 "strip_size_kb": 0, 00:24:36.681 "state": "online", 00:24:36.681 "raid_level": "raid1", 00:24:36.681 "superblock": false, 00:24:36.681 "num_base_bdevs": 4, 00:24:36.681 "num_base_bdevs_discovered": 3, 00:24:36.681 "num_base_bdevs_operational": 3, 00:24:36.681 "base_bdevs_list": [ 00:24:36.681 { 00:24:36.681 "name": "spare", 00:24:36.681 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:36.681 "is_configured": true, 00:24:36.681 "data_offset": 0, 00:24:36.681 "data_size": 65536 00:24:36.681 }, 00:24:36.681 { 00:24:36.681 "name": null, 00:24:36.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.681 "is_configured": false, 00:24:36.681 "data_offset": 0, 00:24:36.681 "data_size": 65536 00:24:36.682 }, 00:24:36.682 { 00:24:36.682 "name": "BaseBdev3", 00:24:36.682 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:36.682 "is_configured": true, 00:24:36.682 "data_offset": 0, 00:24:36.682 "data_size": 65536 00:24:36.682 }, 00:24:36.682 { 00:24:36.682 "name": "BaseBdev4", 00:24:36.682 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:36.682 "is_configured": true, 00:24:36.682 "data_offset": 0, 00:24:36.682 "data_size": 65536 00:24:36.682 } 00:24:36.682 ] 00:24:36.682 }' 00:24:36.682 12:08:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@660 -- # break 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.682 12:08:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:36.940 "name": "raid_bdev1", 00:24:36.940 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:36.940 "strip_size_kb": 0, 00:24:36.940 "state": "online", 00:24:36.940 "raid_level": "raid1", 00:24:36.940 "superblock": false, 00:24:36.940 "num_base_bdevs": 4, 00:24:36.940 "num_base_bdevs_discovered": 3, 00:24:36.940 "num_base_bdevs_operational": 3, 00:24:36.940 "base_bdevs_list": [ 00:24:36.940 { 00:24:36.940 "name": "spare", 00:24:36.940 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:36.940 "is_configured": true, 00:24:36.940 "data_offset": 0, 00:24:36.940 "data_size": 65536 00:24:36.940 }, 00:24:36.940 { 00:24:36.940 "name": null, 00:24:36.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.940 "is_configured": false, 00:24:36.940 "data_offset": 0, 00:24:36.940 "data_size": 65536 00:24:36.940 }, 00:24:36.940 { 00:24:36.940 "name": "BaseBdev3", 00:24:36.940 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:36.940 "is_configured": true, 00:24:36.940 "data_offset": 
0, 00:24:36.940 "data_size": 65536 00:24:36.940 }, 00:24:36.940 { 00:24:36.940 "name": "BaseBdev4", 00:24:36.940 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:36.940 "is_configured": true, 00:24:36.940 "data_offset": 0, 00:24:36.940 "data_size": 65536 00:24:36.940 } 00:24:36.940 ] 00:24:36.940 }' 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.940 12:08:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.199 12:08:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:37.199 "name": "raid_bdev1", 00:24:37.199 "uuid": "8552ccef-1d72-4419-8382-cd0ea283e3a4", 00:24:37.199 "strip_size_kb": 0, 00:24:37.199 "state": "online", 00:24:37.199 "raid_level": "raid1", 00:24:37.199 "superblock": false, 00:24:37.199 "num_base_bdevs": 4, 00:24:37.199 "num_base_bdevs_discovered": 3, 00:24:37.199 "num_base_bdevs_operational": 3, 00:24:37.199 "base_bdevs_list": [ 00:24:37.199 { 00:24:37.199 "name": "spare", 00:24:37.199 "uuid": "032dac28-a961-52c3-bdae-9fa1613225c7", 00:24:37.199 "is_configured": true, 00:24:37.199 "data_offset": 0, 00:24:37.199 "data_size": 65536 00:24:37.199 }, 00:24:37.199 { 00:24:37.199 "name": null, 00:24:37.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.199 "is_configured": false, 00:24:37.199 "data_offset": 0, 00:24:37.199 "data_size": 65536 00:24:37.199 }, 00:24:37.199 { 00:24:37.199 "name": "BaseBdev3", 00:24:37.199 "uuid": "958a016c-97e9-4fd3-a070-55d20e68a63e", 00:24:37.199 "is_configured": true, 00:24:37.199 "data_offset": 0, 00:24:37.199 "data_size": 65536 00:24:37.199 }, 00:24:37.199 { 00:24:37.199 "name": "BaseBdev4", 00:24:37.199 "uuid": "7144b45f-5dba-42ed-9a53-996cd3d86278", 00:24:37.199 "is_configured": true, 00:24:37.199 "data_offset": 0, 00:24:37.199 "data_size": 65536 00:24:37.199 } 00:24:37.199 ] 00:24:37.199 }' 00:24:37.199 12:08:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:37.199 12:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:38.134 12:08:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:38.134 [2024-11-29 12:08:43.516945] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.134 [2024-11-29 12:08:43.516997] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:24:38.134 [2024-11-29 12:08:43.517120] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:38.134 [2024-11-29 12:08:43.517219] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.134 [2024-11-29 12:08:43.517235] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:24:38.134 12:08:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.134 12:08:43 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:38.392 12:08:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:38.392 12:08:43 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:38.392 12:08:43 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@12 -- # local i 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:38.392 12:08:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:38.650 /dev/nbd0 00:24:38.650 12:08:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:38.650 12:08:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:38.650 12:08:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:38.650 12:08:44 -- common/autotest_common.sh@867 -- # local i 00:24:38.650 12:08:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:38.650 12:08:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:38.650 12:08:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:38.650 12:08:44 -- common/autotest_common.sh@871 -- # break 00:24:38.650 12:08:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:38.650 12:08:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:38.650 12:08:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:38.650 1+0 records in 00:24:38.650 1+0 records out 00:24:38.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379888 s, 10.8 MB/s 00:24:38.650 12:08:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.650 12:08:44 -- common/autotest_common.sh@884 -- # size=4096 00:24:38.650 12:08:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.650 12:08:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:38.650 12:08:44 -- common/autotest_common.sh@887 -- # return 0 00:24:38.650 12:08:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:38.650 12:08:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:38.650 12:08:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:38.910 /dev/nbd1 00:24:38.910 12:08:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:24:38.910 12:08:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:38.910 12:08:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:38.910 12:08:44 -- common/autotest_common.sh@867 -- # local i 00:24:38.910 12:08:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:38.910 12:08:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:38.910 12:08:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:38.910 12:08:44 -- common/autotest_common.sh@871 -- # break 00:24:38.910 12:08:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:38.910 12:08:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:38.910 12:08:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:38.910 1+0 records in 00:24:38.910 1+0 records out 00:24:38.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381758 s, 10.7 MB/s 00:24:38.910 12:08:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.910 12:08:44 -- common/autotest_common.sh@884 -- # size=4096 00:24:38.910 12:08:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.910 12:08:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:38.910 12:08:44 -- common/autotest_common.sh@887 -- # return 0 00:24:38.910 12:08:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:38.910 12:08:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:38.910 12:08:44 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:39.169 12:08:44 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:39.169 12:08:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:39.169 12:08:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:39.169 12:08:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:39.169 12:08:44 -- bdev/nbd_common.sh@51 -- # local i 00:24:39.169 12:08:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.169 12:08:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@41 -- # break 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.428 12:08:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:39.687 12:08:45 -- bdev/nbd_common.sh@41 -- # break 00:24:39.687 12:08:45 -- 
bdev/nbd_common.sh@45 -- # return 0 00:24:39.687 12:08:45 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:39.687 12:08:45 -- bdev/bdev_raid.sh@709 -- # killprocess 136017 00:24:39.687 12:08:45 -- common/autotest_common.sh@936 -- # '[' -z 136017 ']' 00:24:39.687 12:08:45 -- common/autotest_common.sh@940 -- # kill -0 136017 00:24:39.687 12:08:45 -- common/autotest_common.sh@941 -- # uname 00:24:39.687 12:08:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:39.687 12:08:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136017 00:24:39.687 12:08:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:39.687 12:08:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:39.687 12:08:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136017' 00:24:39.687 killing process with pid 136017 00:24:39.687 12:08:45 -- common/autotest_common.sh@955 -- # kill 136017 00:24:39.687 Received shutdown signal, test time was about 60.000000 seconds 00:24:39.687 00:24:39.687 Latency(us) 00:24:39.687 [2024-11-29T12:08:45.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.687 [2024-11-29T12:08:45.198Z] =================================================================================================================== 00:24:39.687 [2024-11-29T12:08:45.198Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:39.687 12:08:45 -- common/autotest_common.sh@960 -- # wait 136017 00:24:39.687 [2024-11-29 12:08:45.061153] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:39.687 [2024-11-29 12:08:45.125730] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:39.957 00:24:39.957 real 0m23.173s 00:24:39.957 user 0m32.439s 00:24:39.957 sys 0m4.419s 00:24:39.957 12:08:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:39.957 12:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:39.957 ************************************ 00:24:39.957 END TEST raid_rebuild_test 00:24:39.957 ************************************ 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:24:39.957 12:08:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:39.957 12:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:39.957 12:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:39.957 ************************************ 00:24:39.957 START TEST raid_rebuild_test_sb 00:24:39.957 ************************************ 00:24:39.957 12:08:45 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:39.957 12:08:45 -- 
bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=136581 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:39.957 12:08:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136581 /var/tmp/spdk-raid.sock 00:24:39.957 12:08:45 -- common/autotest_common.sh@829 -- # '[' -z 136581 ']' 00:24:39.957 12:08:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:39.957 12:08:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.957 12:08:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:39.957 12:08:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.957 12:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:40.227 [2024-11-29 12:08:45.500225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:40.227 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:40.227 Zero copy mechanism will not be used. 
00:24:40.227 [2024-11-29 12:08:45.500438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136581 ] 00:24:40.227 [2024-11-29 12:08:45.645167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.485 [2024-11-29 12:08:45.743136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.485 [2024-11-29 12:08:45.798781] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:41.052 12:08:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.052 12:08:46 -- common/autotest_common.sh@862 -- # return 0 00:24:41.052 12:08:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:41.052 12:08:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:41.052 12:08:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:41.311 BaseBdev1_malloc 00:24:41.311 12:08:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:41.570 [2024-11-29 12:08:46.932410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:41.570 [2024-11-29 12:08:46.932812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.570 [2024-11-29 12:08:46.932906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:24:41.570 [2024-11-29 12:08:46.933160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.570 [2024-11-29 12:08:46.936008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.570 [2024-11-29 12:08:46.936206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:41.570 BaseBdev1 00:24:41.570 12:08:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:41.570 12:08:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:41.570 12:08:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:41.829 BaseBdev2_malloc 00:24:41.829 12:08:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:42.087 [2024-11-29 12:08:47.412265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:42.087 [2024-11-29 12:08:47.412664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.087 [2024-11-29 12:08:47.412756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:24:42.087 [2024-11-29 12:08:47.413014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.087 [2024-11-29 12:08:47.415730] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.087 [2024-11-29 12:08:47.415930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:42.087 BaseBdev2 00:24:42.087 12:08:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:42.087 12:08:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:42.087 12:08:47 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:42.345 BaseBdev3_malloc 00:24:42.345 12:08:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:42.605 [2024-11-29 12:08:47.914716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:42.605 [2024-11-29 12:08:47.915088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.605 [2024-11-29 12:08:47.915184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:42.605 [2024-11-29 12:08:47.915363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.605 [2024-11-29 12:08:47.918081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.605 [2024-11-29 12:08:47.918393] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:42.605 BaseBdev3 00:24:42.605 12:08:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:42.605 12:08:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:42.605 12:08:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:42.862 BaseBdev4_malloc 00:24:42.862 12:08:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:43.120 [2024-11-29 12:08:48.434896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:43.120 [2024-11-29 12:08:48.435321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.120 [2024-11-29 12:08:48.435412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:43.120 [2024-11-29 12:08:48.435585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.120 [2024-11-29 12:08:48.438247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.120 [2024-11-29 12:08:48.438491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:43.120 BaseBdev4 00:24:43.120 12:08:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:43.377 spare_malloc 00:24:43.377 12:08:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:43.635 spare_delay 00:24:43.635 12:08:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:43.635 [2024-11-29 12:08:49.149461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:43.893 [2024-11-29 12:08:49.149755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.893 [2024-11-29 12:08:49.149846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:43.893 [2024-11-29 12:08:49.150090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.893 [2024-11-29 12:08:49.152858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:43.893 [2024-11-29 12:08:49.153078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:43.893 spare 00:24:43.893 12:08:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:43.893 [2024-11-29 12:08:49.385659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.893 [2024-11-29 12:08:49.388333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:43.893 [2024-11-29 12:08:49.388555] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.893 [2024-11-29 12:08:49.388664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:43.893 [2024-11-29 12:08:49.389078] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:43.893 [2024-11-29 12:08:49.389134] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:43.893 [2024-11-29 12:08:49.389436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:43.893 [2024-11-29 12:08:49.390061] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:43.893 [2024-11-29 12:08:49.390202] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:43.893 [2024-11-29 12:08:49.390566] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.150 12:08:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.150 "name": "raid_bdev1", 00:24:44.150 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:44.150 "strip_size_kb": 0, 00:24:44.150 "state": "online", 00:24:44.150 "raid_level": "raid1", 00:24:44.150 "superblock": true, 00:24:44.150 "num_base_bdevs": 4, 00:24:44.150 "num_base_bdevs_discovered": 4, 00:24:44.150 "num_base_bdevs_operational": 4, 00:24:44.150 "base_bdevs_list": [ 00:24:44.150 { 00:24:44.150 "name": "BaseBdev1", 00:24:44.150 "uuid": "becc1521-1690-5413-b5f5-4c40d9827cb8", 00:24:44.150 "is_configured": true, 00:24:44.150 "data_offset": 2048, 00:24:44.150 "data_size": 63488 00:24:44.150 }, 00:24:44.150 { 00:24:44.150 "name": "BaseBdev2", 00:24:44.150 "uuid": "7f7fe27a-d7c6-525a-a4a4-fc137dcfb8e2", 00:24:44.150 "is_configured": true, 00:24:44.150 "data_offset": 2048, 
00:24:44.150 "data_size": 63488 00:24:44.150 }, 00:24:44.150 { 00:24:44.150 "name": "BaseBdev3", 00:24:44.150 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:44.150 "is_configured": true, 00:24:44.150 "data_offset": 2048, 00:24:44.150 "data_size": 63488 00:24:44.150 }, 00:24:44.150 { 00:24:44.151 "name": "BaseBdev4", 00:24:44.151 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:44.151 "is_configured": true, 00:24:44.151 "data_offset": 2048, 00:24:44.151 "data_size": 63488 00:24:44.151 } 00:24:44.151 ] 00:24:44.151 }' 00:24:44.151 12:08:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.151 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.081 12:08:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:45.081 12:08:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:45.081 [2024-11-29 12:08:50.547063] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.081 12:08:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:24:45.081 12:08:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.081 12:08:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:45.647 12:08:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:45.647 12:08:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:45.647 12:08:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:45.647 12:08:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@12 -- # local i 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:45.647 12:08:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:45.647 [2024-11-29 12:08:51.111040] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:24:45.647 /dev/nbd0 00:24:45.647 12:08:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:45.906 12:08:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:45.906 12:08:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:45.906 12:08:51 -- common/autotest_common.sh@867 -- # local i 00:24:45.906 12:08:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:45.906 12:08:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:45.906 12:08:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:45.906 12:08:51 -- common/autotest_common.sh@871 -- # break 00:24:45.906 12:08:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:45.906 12:08:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:45.906 12:08:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:45.906 1+0 records in 00:24:45.906 1+0 records out 00:24:45.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295251 s, 13.9 
MB/s 00:24:45.906 12:08:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.906 12:08:51 -- common/autotest_common.sh@884 -- # size=4096 00:24:45.906 12:08:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.906 12:08:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:45.906 12:08:51 -- common/autotest_common.sh@887 -- # return 0 00:24:45.906 12:08:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:45.906 12:08:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:45.906 12:08:51 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:45.906 12:08:51 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:45.906 12:08:51 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:54.092 63488+0 records in 00:24:54.092 63488+0 records out 00:24:54.092 32505856 bytes (33 MB, 31 MiB) copied, 7.20042 s, 4.5 MB/s 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@51 -- # local i 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.092 [2024-11-29 12:08:58.676713] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@41 -- # break 00:24:54.092 12:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:54.092 [2024-11-29 12:08:58.941180] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.092 12:08:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:54.092 12:08:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.092 "name": "raid_bdev1", 00:24:54.092 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:54.092 "strip_size_kb": 0, 00:24:54.092 "state": "online", 00:24:54.092 "raid_level": "raid1", 00:24:54.092 "superblock": true, 00:24:54.092 "num_base_bdevs": 4, 00:24:54.092 "num_base_bdevs_discovered": 3, 00:24:54.092 "num_base_bdevs_operational": 3, 00:24:54.092 "base_bdevs_list": [ 00:24:54.092 { 00:24:54.092 "name": null, 00:24:54.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.092 "is_configured": false, 00:24:54.092 "data_offset": 2048, 00:24:54.092 "data_size": 63488 00:24:54.092 }, 00:24:54.092 { 00:24:54.092 "name": "BaseBdev2", 00:24:54.092 "uuid": "7f7fe27a-d7c6-525a-a4a4-fc137dcfb8e2", 00:24:54.092 "is_configured": true, 00:24:54.092 "data_offset": 2048, 00:24:54.092 "data_size": 63488 00:24:54.092 }, 00:24:54.092 { 00:24:54.092 "name": "BaseBdev3", 00:24:54.092 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:54.092 "is_configured": true, 00:24:54.092 "data_offset": 2048, 00:24:54.092 "data_size": 63488 00:24:54.092 }, 00:24:54.092 { 00:24:54.092 "name": "BaseBdev4", 00:24:54.092 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:54.092 "is_configured": true, 00:24:54.092 "data_offset": 2048, 00:24:54.092 "data_size": 63488 00:24:54.092 } 00:24:54.092 ] 00:24:54.092 }' 00:24:54.092 12:08:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.092 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:54.352 12:08:59 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:54.611 [2024-11-29 12:09:00.097423] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:54.611 [2024-11-29 12:09:00.097504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.611 [2024-11-29 12:09:00.102088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:24:54.611 [2024-11-29 12:09:00.104499] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:54.611 12:09:00 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:55.992 12:09:01 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:55.993 "name": "raid_bdev1", 00:24:55.993 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:55.993 "strip_size_kb": 0, 00:24:55.993 "state": "online", 00:24:55.993 "raid_level": "raid1", 00:24:55.993 "superblock": true, 00:24:55.993 "num_base_bdevs": 4, 00:24:55.993 "num_base_bdevs_discovered": 4, 00:24:55.993 "num_base_bdevs_operational": 4, 00:24:55.993 "process": { 00:24:55.993 "type": "rebuild", 00:24:55.993 "target": "spare", 00:24:55.993 "progress": { 00:24:55.993 "blocks": 24576, 00:24:55.993 "percent": 38 00:24:55.993 } 00:24:55.993 
}, 00:24:55.993 "base_bdevs_list": [ 00:24:55.993 { 00:24:55.993 "name": "spare", 00:24:55.993 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:24:55.993 "is_configured": true, 00:24:55.993 "data_offset": 2048, 00:24:55.993 "data_size": 63488 00:24:55.993 }, 00:24:55.993 { 00:24:55.993 "name": "BaseBdev2", 00:24:55.993 "uuid": "7f7fe27a-d7c6-525a-a4a4-fc137dcfb8e2", 00:24:55.993 "is_configured": true, 00:24:55.993 "data_offset": 2048, 00:24:55.993 "data_size": 63488 00:24:55.993 }, 00:24:55.993 { 00:24:55.993 "name": "BaseBdev3", 00:24:55.993 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:55.993 "is_configured": true, 00:24:55.993 "data_offset": 2048, 00:24:55.993 "data_size": 63488 00:24:55.993 }, 00:24:55.993 { 00:24:55.993 "name": "BaseBdev4", 00:24:55.993 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:55.993 "is_configured": true, 00:24:55.993 "data_offset": 2048, 00:24:55.993 "data_size": 63488 00:24:55.993 } 00:24:55.993 ] 00:24:55.993 }' 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.993 12:09:01 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:56.259 [2024-11-29 12:09:01.695674] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.259 [2024-11-29 12:09:01.716309] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:56.259 [2024-11-29 12:09:01.716423] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.259 12:09:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.517 12:09:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.517 "name": "raid_bdev1", 00:24:56.517 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:56.517 "strip_size_kb": 0, 00:24:56.517 "state": "online", 00:24:56.517 "raid_level": "raid1", 00:24:56.517 "superblock": true, 00:24:56.517 "num_base_bdevs": 4, 00:24:56.517 "num_base_bdevs_discovered": 3, 00:24:56.517 "num_base_bdevs_operational": 3, 00:24:56.517 "base_bdevs_list": [ 00:24:56.517 { 00:24:56.517 "name": null, 00:24:56.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.517 "is_configured": false, 00:24:56.517 "data_offset": 2048, 00:24:56.517 "data_size": 63488 00:24:56.517 }, 
00:24:56.517 { 00:24:56.517 "name": "BaseBdev2", 00:24:56.517 "uuid": "7f7fe27a-d7c6-525a-a4a4-fc137dcfb8e2", 00:24:56.517 "is_configured": true, 00:24:56.517 "data_offset": 2048, 00:24:56.517 "data_size": 63488 00:24:56.517 }, 00:24:56.517 { 00:24:56.517 "name": "BaseBdev3", 00:24:56.517 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:56.517 "is_configured": true, 00:24:56.517 "data_offset": 2048, 00:24:56.517 "data_size": 63488 00:24:56.517 }, 00:24:56.517 { 00:24:56.517 "name": "BaseBdev4", 00:24:56.517 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:56.517 "is_configured": true, 00:24:56.517 "data_offset": 2048, 00:24:56.517 "data_size": 63488 00:24:56.517 } 00:24:56.517 ] 00:24:56.517 }' 00:24:56.517 12:09:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.517 12:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:57.454 "name": "raid_bdev1", 00:24:57.454 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:57.454 "strip_size_kb": 0, 00:24:57.454 "state": "online", 00:24:57.454 "raid_level": "raid1", 00:24:57.454 "superblock": true, 00:24:57.454 "num_base_bdevs": 4, 00:24:57.454 "num_base_bdevs_discovered": 3, 00:24:57.454 "num_base_bdevs_operational": 3, 00:24:57.454 "base_bdevs_list": [ 00:24:57.454 { 00:24:57.454 "name": null, 00:24:57.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.454 "is_configured": false, 00:24:57.454 "data_offset": 2048, 00:24:57.454 "data_size": 63488 00:24:57.454 }, 00:24:57.454 { 00:24:57.454 "name": "BaseBdev2", 00:24:57.454 "uuid": "7f7fe27a-d7c6-525a-a4a4-fc137dcfb8e2", 00:24:57.454 "is_configured": true, 00:24:57.454 "data_offset": 2048, 00:24:57.454 "data_size": 63488 00:24:57.454 }, 00:24:57.454 { 00:24:57.454 "name": "BaseBdev3", 00:24:57.454 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:57.454 "is_configured": true, 00:24:57.454 "data_offset": 2048, 00:24:57.454 "data_size": 63488 00:24:57.454 }, 00:24:57.454 { 00:24:57.454 "name": "BaseBdev4", 00:24:57.454 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:57.454 "is_configured": true, 00:24:57.454 "data_offset": 2048, 00:24:57.454 "data_size": 63488 00:24:57.454 } 00:24:57.454 ] 00:24:57.454 }' 00:24:57.454 12:09:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:57.713 12:09:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:57.713 12:09:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:57.713 12:09:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:57.713 12:09:03 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:57.971 [2024-11-29 12:09:03.278036] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:57.971 [2024-11-29 12:09:03.278102] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:57.971 [2024-11-29 12:09:03.282582] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:24:57.971 [2024-11-29 12:09:03.284859] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:57.971 12:09:03 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.907 12:09:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.166 "name": "raid_bdev1", 00:24:59.166 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:59.166 "strip_size_kb": 0, 00:24:59.166 "state": "online", 00:24:59.166 "raid_level": "raid1", 00:24:59.166 "superblock": true, 00:24:59.166 "num_base_bdevs": 4, 00:24:59.166 "num_base_bdevs_discovered": 4, 00:24:59.166 "num_base_bdevs_operational": 4, 00:24:59.166 "process": { 00:24:59.166 "type": "rebuild", 00:24:59.166 "target": "spare", 00:24:59.166 "progress": { 00:24:59.166 "blocks": 24576, 00:24:59.166 "percent": 38 00:24:59.166 } 00:24:59.166 }, 00:24:59.166 "base_bdevs_list": [ 00:24:59.166 { 00:24:59.166 "name": "spare", 00:24:59.166 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:24:59.166 "is_configured": true, 00:24:59.166 "data_offset": 2048, 00:24:59.166 "data_size": 63488 00:24:59.166 }, 00:24:59.166 { 00:24:59.166 "name": "BaseBdev2", 00:24:59.166 "uuid": "7f7fe27a-d7c6-525a-a4a4-fc137dcfb8e2", 00:24:59.166 "is_configured": true, 00:24:59.166 "data_offset": 2048, 00:24:59.166 "data_size": 63488 00:24:59.166 }, 00:24:59.166 { 00:24:59.166 "name": "BaseBdev3", 00:24:59.166 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:59.166 "is_configured": true, 00:24:59.166 "data_offset": 2048, 00:24:59.166 "data_size": 63488 00:24:59.166 }, 00:24:59.166 { 00:24:59.166 "name": "BaseBdev4", 00:24:59.166 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:59.166 "is_configured": true, 00:24:59.166 "data_offset": 2048, 00:24:59.166 "data_size": 63488 00:24:59.166 } 00:24:59.166 ] 00:24:59.166 }' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:59.166 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:59.166 12:09:04 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:59.425 [2024-11-29 12:09:04.863826] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:59.425 [2024-11-29 12:09:04.895239] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.687 12:09:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.945 "name": "raid_bdev1", 00:24:59.945 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:24:59.945 "strip_size_kb": 0, 00:24:59.945 "state": "online", 00:24:59.945 "raid_level": "raid1", 00:24:59.945 "superblock": true, 00:24:59.945 "num_base_bdevs": 4, 00:24:59.945 "num_base_bdevs_discovered": 3, 00:24:59.945 "num_base_bdevs_operational": 3, 00:24:59.945 "process": { 00:24:59.945 "type": "rebuild", 00:24:59.945 "target": "spare", 00:24:59.945 "progress": { 00:24:59.945 "blocks": 38912, 00:24:59.945 "percent": 61 00:24:59.945 } 00:24:59.945 }, 00:24:59.945 "base_bdevs_list": [ 00:24:59.945 { 00:24:59.945 "name": "spare", 00:24:59.945 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:24:59.945 "is_configured": true, 00:24:59.945 "data_offset": 2048, 00:24:59.945 "data_size": 63488 00:24:59.945 }, 00:24:59.945 { 00:24:59.945 "name": null, 00:24:59.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.945 "is_configured": false, 00:24:59.945 "data_offset": 2048, 00:24:59.945 "data_size": 63488 00:24:59.945 }, 00:24:59.945 { 00:24:59.945 "name": "BaseBdev3", 00:24:59.945 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:24:59.945 "is_configured": true, 00:24:59.945 "data_offset": 2048, 00:24:59.945 "data_size": 63488 00:24:59.945 }, 00:24:59.945 { 00:24:59.945 "name": "BaseBdev4", 00:24:59.945 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:24:59.945 "is_configured": true, 00:24:59.945 "data_offset": 2048, 00:24:59.945 "data_size": 63488 00:24:59.945 } 00:24:59.945 ] 00:24:59.945 }' 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@657 -- # local timeout=519 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:59.945 12:09:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.946 12:09:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:59.946 12:09:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:59.946 12:09:05 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:24:59.946 12:09:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:59.946 12:09:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.946 12:09:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.204 12:09:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:00.204 "name": "raid_bdev1", 00:25:00.204 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:25:00.204 "strip_size_kb": 0, 00:25:00.204 "state": "online", 00:25:00.204 "raid_level": "raid1", 00:25:00.204 "superblock": true, 00:25:00.204 "num_base_bdevs": 4, 00:25:00.204 "num_base_bdevs_discovered": 3, 00:25:00.204 "num_base_bdevs_operational": 3, 00:25:00.204 "process": { 00:25:00.204 "type": "rebuild", 00:25:00.204 "target": "spare", 00:25:00.204 "progress": { 00:25:00.204 "blocks": 45056, 00:25:00.204 "percent": 70 00:25:00.204 } 00:25:00.204 }, 00:25:00.204 "base_bdevs_list": [ 00:25:00.204 { 00:25:00.204 "name": "spare", 00:25:00.204 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:25:00.204 "is_configured": true, 00:25:00.204 "data_offset": 2048, 00:25:00.204 "data_size": 63488 00:25:00.204 }, 00:25:00.204 { 00:25:00.204 "name": null, 00:25:00.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.204 "is_configured": false, 00:25:00.204 "data_offset": 2048, 00:25:00.204 "data_size": 63488 00:25:00.204 }, 00:25:00.204 { 00:25:00.204 "name": "BaseBdev3", 00:25:00.204 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:25:00.204 "is_configured": true, 00:25:00.204 "data_offset": 2048, 00:25:00.204 "data_size": 63488 00:25:00.204 }, 00:25:00.204 { 00:25:00.204 "name": "BaseBdev4", 00:25:00.204 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:25:00.204 "is_configured": true, 00:25:00.204 "data_offset": 2048, 00:25:00.204 "data_size": 63488 00:25:00.204 } 00:25:00.204 ] 00:25:00.204 }' 00:25:00.204 12:09:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:00.204 12:09:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.204 12:09:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:00.204 12:09:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.204 12:09:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:01.140 [2024-11-29 12:09:06.405740] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:01.140 [2024-11-29 12:09:06.405875] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:01.140 [2024-11-29 12:09:06.406084] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.398 12:09:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.656 12:09:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.656 "name": "raid_bdev1", 00:25:01.656 "uuid": 
"dadff76b-067a-445f-bc2c-f8e2214a8925", 00:25:01.656 "strip_size_kb": 0, 00:25:01.656 "state": "online", 00:25:01.656 "raid_level": "raid1", 00:25:01.656 "superblock": true, 00:25:01.656 "num_base_bdevs": 4, 00:25:01.656 "num_base_bdevs_discovered": 3, 00:25:01.656 "num_base_bdevs_operational": 3, 00:25:01.656 "base_bdevs_list": [ 00:25:01.656 { 00:25:01.656 "name": "spare", 00:25:01.656 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:25:01.656 "is_configured": true, 00:25:01.656 "data_offset": 2048, 00:25:01.656 "data_size": 63488 00:25:01.656 }, 00:25:01.656 { 00:25:01.656 "name": null, 00:25:01.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.656 "is_configured": false, 00:25:01.656 "data_offset": 2048, 00:25:01.656 "data_size": 63488 00:25:01.656 }, 00:25:01.656 { 00:25:01.656 "name": "BaseBdev3", 00:25:01.656 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:25:01.656 "is_configured": true, 00:25:01.656 "data_offset": 2048, 00:25:01.656 "data_size": 63488 00:25:01.656 }, 00:25:01.656 { 00:25:01.656 "name": "BaseBdev4", 00:25:01.656 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:25:01.656 "is_configured": true, 00:25:01.656 "data_offset": 2048, 00:25:01.656 "data_size": 63488 00:25:01.656 } 00:25:01.656 ] 00:25:01.656 }' 00:25:01.656 12:09:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@660 -- # break 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.656 12:09:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.914 12:09:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.914 "name": "raid_bdev1", 00:25:01.914 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:25:01.914 "strip_size_kb": 0, 00:25:01.914 "state": "online", 00:25:01.914 "raid_level": "raid1", 00:25:01.914 "superblock": true, 00:25:01.914 "num_base_bdevs": 4, 00:25:01.914 "num_base_bdevs_discovered": 3, 00:25:01.914 "num_base_bdevs_operational": 3, 00:25:01.914 "base_bdevs_list": [ 00:25:01.914 { 00:25:01.914 "name": "spare", 00:25:01.914 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:25:01.914 "is_configured": true, 00:25:01.914 "data_offset": 2048, 00:25:01.914 "data_size": 63488 00:25:01.914 }, 00:25:01.914 { 00:25:01.914 "name": null, 00:25:01.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.914 "is_configured": false, 00:25:01.914 "data_offset": 2048, 00:25:01.914 "data_size": 63488 00:25:01.914 }, 00:25:01.914 { 00:25:01.914 "name": "BaseBdev3", 00:25:01.914 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:25:01.914 "is_configured": true, 00:25:01.914 "data_offset": 2048, 00:25:01.914 "data_size": 63488 00:25:01.914 }, 00:25:01.914 { 00:25:01.914 "name": "BaseBdev4", 00:25:01.914 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:25:01.914 
"is_configured": true, 00:25:01.914 "data_offset": 2048, 00:25:01.914 "data_size": 63488 00:25:01.914 } 00:25:01.914 ] 00:25:01.914 }' 00:25:01.914 12:09:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.914 12:09:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:01.914 12:09:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.176 12:09:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.435 12:09:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:02.435 "name": "raid_bdev1", 00:25:02.435 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:25:02.435 "strip_size_kb": 0, 00:25:02.435 "state": "online", 00:25:02.435 "raid_level": "raid1", 00:25:02.435 "superblock": true, 00:25:02.435 "num_base_bdevs": 4, 00:25:02.435 "num_base_bdevs_discovered": 3, 00:25:02.435 "num_base_bdevs_operational": 3, 00:25:02.435 "base_bdevs_list": [ 00:25:02.435 { 00:25:02.435 "name": "spare", 00:25:02.435 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:25:02.435 "is_configured": true, 00:25:02.435 "data_offset": 2048, 00:25:02.435 "data_size": 63488 00:25:02.435 }, 00:25:02.435 { 00:25:02.435 "name": null, 00:25:02.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.435 "is_configured": false, 00:25:02.435 "data_offset": 2048, 00:25:02.435 "data_size": 63488 00:25:02.435 }, 00:25:02.435 { 00:25:02.435 "name": "BaseBdev3", 00:25:02.435 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:25:02.435 "is_configured": true, 00:25:02.435 "data_offset": 2048, 00:25:02.435 "data_size": 63488 00:25:02.435 }, 00:25:02.435 { 00:25:02.435 "name": "BaseBdev4", 00:25:02.435 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:25:02.435 "is_configured": true, 00:25:02.435 "data_offset": 2048, 00:25:02.435 "data_size": 63488 00:25:02.435 } 00:25:02.435 ] 00:25:02.435 }' 00:25:02.435 12:09:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:02.435 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:25:03.061 12:09:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:03.319 [2024-11-29 12:09:08.645160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.319 [2024-11-29 12:09:08.645212] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.319 [2024-11-29 12:09:08.645343] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.319 [2024-11-29 
12:09:08.645472] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.319 [2024-11-29 12:09:08.645488] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:03.320 12:09:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:03.320 12:09:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.578 12:09:08 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:03.578 12:09:08 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:03.578 12:09:08 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@12 -- # local i 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:03.578 12:09:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:03.836 /dev/nbd0 00:25:03.836 12:09:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:03.836 12:09:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:03.836 12:09:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:03.836 12:09:09 -- common/autotest_common.sh@867 -- # local i 00:25:03.836 12:09:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:03.836 12:09:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:03.836 12:09:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:03.836 12:09:09 -- common/autotest_common.sh@871 -- # break 00:25:03.836 12:09:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:03.836 12:09:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:03.836 12:09:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:03.836 1+0 records in 00:25:03.836 1+0 records out 00:25:03.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070254 s, 5.8 MB/s 00:25:03.836 12:09:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:03.836 12:09:09 -- common/autotest_common.sh@884 -- # size=4096 00:25:03.836 12:09:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:03.836 12:09:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:03.836 12:09:09 -- common/autotest_common.sh@887 -- # return 0 00:25:03.836 12:09:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:03.836 12:09:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:03.836 12:09:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:04.096 /dev/nbd1 00:25:04.096 12:09:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:04.096 12:09:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:04.096 12:09:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:04.096 12:09:09 -- 
common/autotest_common.sh@867 -- # local i 00:25:04.096 12:09:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:04.096 12:09:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:04.096 12:09:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:04.096 12:09:09 -- common/autotest_common.sh@871 -- # break 00:25:04.096 12:09:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:04.096 12:09:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:04.096 12:09:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:04.096 1+0 records in 00:25:04.096 1+0 records out 00:25:04.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000711267 s, 5.8 MB/s 00:25:04.096 12:09:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.096 12:09:09 -- common/autotest_common.sh@884 -- # size=4096 00:25:04.096 12:09:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.096 12:09:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:04.096 12:09:09 -- common/autotest_common.sh@887 -- # return 0 00:25:04.096 12:09:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:04.096 12:09:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:04.096 12:09:09 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:04.355 12:09:09 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:04.355 12:09:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:04.355 12:09:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:04.355 12:09:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:04.355 12:09:09 -- bdev/nbd_common.sh@51 -- # local i 00:25:04.355 12:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:04.355 12:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@41 -- # break 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:04.613 12:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@41 -- # break 00:25:04.871 12:09:10 -- bdev/nbd_common.sh@45 -- # return 0 00:25:04.871 12:09:10 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:04.871 12:09:10 -- bdev/bdev_raid.sh@694 -- # for bdev in 
"${base_bdevs[@]}" 00:25:04.871 12:09:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:04.871 12:09:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:05.130 12:09:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:05.389 [2024-11-29 12:09:10.717161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:05.389 [2024-11-29 12:09:10.717289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.389 [2024-11-29 12:09:10.717344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:05.389 [2024-11-29 12:09:10.717370] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.389 [2024-11-29 12:09:10.720159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.389 [2024-11-29 12:09:10.720244] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:05.389 [2024-11-29 12:09:10.720359] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:05.389 [2024-11-29 12:09:10.720448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.389 BaseBdev1 00:25:05.389 12:09:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:05.389 12:09:10 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:25:05.389 12:09:10 -- bdev/bdev_raid.sh@696 -- # continue 00:25:05.389 12:09:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:05.389 12:09:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:05.389 12:09:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:05.648 12:09:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:05.907 [2024-11-29 12:09:11.221259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:05.907 [2024-11-29 12:09:11.221384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.907 [2024-11-29 12:09:11.221436] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:05.907 [2024-11-29 12:09:11.221463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.907 [2024-11-29 12:09:11.221953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.907 [2024-11-29 12:09:11.222041] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:05.907 [2024-11-29 12:09:11.222140] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:05.908 [2024-11-29 12:09:11.222157] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:25:05.908 [2024-11-29 12:09:11.222165] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.908 [2024-11-29 12:09:11.222201] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:25:05.908 [2024-11-29 12:09:11.222262] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:05.908 BaseBdev3 00:25:05.908 12:09:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:05.908 12:09:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:05.908 12:09:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:06.166 12:09:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:06.425 [2024-11-29 12:09:11.693334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:06.425 [2024-11-29 12:09:11.693456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.425 [2024-11-29 12:09:11.693507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:06.425 [2024-11-29 12:09:11.693541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.425 [2024-11-29 12:09:11.694052] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.425 [2024-11-29 12:09:11.694121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:06.425 [2024-11-29 12:09:11.694220] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:06.425 [2024-11-29 12:09:11.694267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:06.425 BaseBdev4 00:25:06.425 12:09:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:06.683 12:09:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:06.683 [2024-11-29 12:09:12.181476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:06.683 [2024-11-29 12:09:12.181597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.683 [2024-11-29 12:09:12.181641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:06.683 [2024-11-29 12:09:12.181674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.683 [2024-11-29 12:09:12.182224] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.683 [2024-11-29 12:09:12.182294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:06.683 [2024-11-29 12:09:12.182417] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:06.683 [2024-11-29 12:09:12.182467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:06.683 spare 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.942 12:09:12 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.942 12:09:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.942 [2024-11-29 12:09:12.282624] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:25:06.942 [2024-11-29 12:09:12.282680] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:06.942 [2024-11-29 12:09:12.282894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf0b0 00:25:06.942 [2024-11-29 12:09:12.283430] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:25:06.942 [2024-11-29 12:09:12.283456] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:25:06.942 [2024-11-29 12:09:12.283611] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.201 12:09:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.201 "name": "raid_bdev1", 00:25:07.201 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 00:25:07.201 "strip_size_kb": 0, 00:25:07.201 "state": "online", 00:25:07.201 "raid_level": "raid1", 00:25:07.201 "superblock": true, 00:25:07.201 "num_base_bdevs": 4, 00:25:07.201 "num_base_bdevs_discovered": 3, 00:25:07.201 "num_base_bdevs_operational": 3, 00:25:07.201 "base_bdevs_list": [ 00:25:07.201 { 00:25:07.201 "name": "spare", 00:25:07.201 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:25:07.201 "is_configured": true, 00:25:07.201 "data_offset": 2048, 00:25:07.201 "data_size": 63488 00:25:07.201 }, 00:25:07.201 { 00:25:07.201 "name": null, 00:25:07.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.201 "is_configured": false, 00:25:07.201 "data_offset": 2048, 00:25:07.201 "data_size": 63488 00:25:07.201 }, 00:25:07.201 { 00:25:07.201 "name": "BaseBdev3", 00:25:07.201 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:25:07.201 "is_configured": true, 00:25:07.201 "data_offset": 2048, 00:25:07.201 "data_size": 63488 00:25:07.201 }, 00:25:07.201 { 00:25:07.201 "name": "BaseBdev4", 00:25:07.201 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:25:07.201 "is_configured": true, 00:25:07.201 "data_offset": 2048, 00:25:07.201 "data_size": 63488 00:25:07.201 } 00:25:07.201 ] 00:25:07.201 }' 00:25:07.201 12:09:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.201 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.769 12:09:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.028 12:09:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:08.028 "name": "raid_bdev1", 00:25:08.028 "uuid": "dadff76b-067a-445f-bc2c-f8e2214a8925", 
00:25:08.028 "strip_size_kb": 0, 00:25:08.028 "state": "online", 00:25:08.028 "raid_level": "raid1", 00:25:08.028 "superblock": true, 00:25:08.028 "num_base_bdevs": 4, 00:25:08.028 "num_base_bdevs_discovered": 3, 00:25:08.028 "num_base_bdevs_operational": 3, 00:25:08.028 "base_bdevs_list": [ 00:25:08.028 { 00:25:08.028 "name": "spare", 00:25:08.028 "uuid": "0c6a00fc-870d-5976-9640-fe3f0c60605d", 00:25:08.028 "is_configured": true, 00:25:08.028 "data_offset": 2048, 00:25:08.028 "data_size": 63488 00:25:08.028 }, 00:25:08.028 { 00:25:08.028 "name": null, 00:25:08.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.028 "is_configured": false, 00:25:08.028 "data_offset": 2048, 00:25:08.028 "data_size": 63488 00:25:08.028 }, 00:25:08.028 { 00:25:08.028 "name": "BaseBdev3", 00:25:08.028 "uuid": "53cc66ba-d02b-5ff4-917c-c0c3fa6849a0", 00:25:08.028 "is_configured": true, 00:25:08.028 "data_offset": 2048, 00:25:08.028 "data_size": 63488 00:25:08.028 }, 00:25:08.028 { 00:25:08.028 "name": "BaseBdev4", 00:25:08.028 "uuid": "7ca2db3f-ed88-539a-9e24-f86e221e92c3", 00:25:08.028 "is_configured": true, 00:25:08.028 "data_offset": 2048, 00:25:08.028 "data_size": 63488 00:25:08.028 } 00:25:08.028 ] 00:25:08.028 }' 00:25:08.028 12:09:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:08.028 12:09:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:08.028 12:09:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:08.286 12:09:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:08.286 12:09:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.286 12:09:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:08.545 12:09:13 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.545 12:09:13 -- bdev/bdev_raid.sh@709 -- # killprocess 136581 00:25:08.545 12:09:13 -- common/autotest_common.sh@936 -- # '[' -z 136581 ']' 00:25:08.545 12:09:13 -- common/autotest_common.sh@940 -- # kill -0 136581 00:25:08.545 12:09:13 -- common/autotest_common.sh@941 -- # uname 00:25:08.545 12:09:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.545 12:09:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136581 00:25:08.545 12:09:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:08.545 killing process with pid 136581 00:25:08.545 12:09:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:08.545 12:09:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136581' 00:25:08.545 Received shutdown signal, test time was about 60.000000 seconds 00:25:08.545 00:25:08.545 Latency(us) 00:25:08.545 [2024-11-29T12:09:14.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.545 [2024-11-29T12:09:14.056Z] =================================================================================================================== 00:25:08.545 [2024-11-29T12:09:14.056Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:08.545 12:09:13 -- common/autotest_common.sh@955 -- # kill 136581 00:25:08.545 [2024-11-29 12:09:13.831522] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:08.545 12:09:13 -- common/autotest_common.sh@960 -- # wait 136581 00:25:08.545 [2024-11-29 12:09:13.831630] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.545 [2024-11-29 12:09:13.831729] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.545 [2024-11-29 12:09:13.831751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:25:08.545 [2024-11-29 12:09:13.896291] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:08.809 00:25:08.809 real 0m28.704s 00:25:08.809 user 0m42.317s 00:25:08.809 sys 0m4.816s 00:25:08.809 12:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:08.809 ************************************ 00:25:08.809 END TEST raid_rebuild_test_sb 00:25:08.809 ************************************ 00:25:08.809 12:09:14 -- common/autotest_common.sh@10 -- # set +x 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:25:08.809 12:09:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:08.809 12:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:08.809 12:09:14 -- common/autotest_common.sh@10 -- # set +x 00:25:08.809 ************************************ 00:25:08.809 START TEST raid_rebuild_test_io 00:25:08.809 ************************************ 00:25:08.809 12:09:14 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=137252 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@543 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:08.809 12:09:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137252 /var/tmp/spdk-raid.sock 00:25:08.809 12:09:14 -- common/autotest_common.sh@829 -- # '[' -z 137252 ']' 00:25:08.809 12:09:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:08.809 12:09:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.809 12:09:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:08.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:08.809 12:09:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.809 12:09:14 -- common/autotest_common.sh@10 -- # set +x 00:25:08.809 [2024-11-29 12:09:14.273201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:08.809 [2024-11-29 12:09:14.273395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137252 ] 00:25:08.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:08.809 Zero copy mechanism will not be used. 00:25:09.076 [2024-11-29 12:09:14.414754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.076 [2024-11-29 12:09:14.510614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.076 [2024-11-29 12:09:14.565546] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:10.012 12:09:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.012 12:09:15 -- common/autotest_common.sh@862 -- # return 0 00:25:10.012 12:09:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:10.012 12:09:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:10.012 12:09:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:10.012 BaseBdev1 00:25:10.012 12:09:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:10.012 12:09:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:10.012 12:09:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:10.271 BaseBdev2 00:25:10.271 12:09:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:10.271 12:09:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:10.271 12:09:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:10.530 BaseBdev3 00:25:10.530 12:09:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:10.530 12:09:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:10.530 12:09:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:10.788 BaseBdev4 00:25:10.788 12:09:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:11.046 spare_malloc 00:25:11.046 12:09:16 -- bdev/bdev_raid.sh@559 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:11.304 spare_delay 00:25:11.304 12:09:16 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:11.562 [2024-11-29 12:09:17.013423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:11.562 [2024-11-29 12:09:17.013560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.562 [2024-11-29 12:09:17.013609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:11.562 [2024-11-29 12:09:17.013658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.562 [2024-11-29 12:09:17.016579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.562 [2024-11-29 12:09:17.016664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:11.562 spare 00:25:11.562 12:09:17 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:11.821 [2024-11-29 12:09:17.281634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.821 [2024-11-29 12:09:17.284006] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:11.821 [2024-11-29 12:09:17.284084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:11.821 [2024-11-29 12:09:17.284129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:11.821 [2024-11-29 12:09:17.284236] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:25:11.821 [2024-11-29 12:09:17.284251] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:11.821 [2024-11-29 12:09:17.284440] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:25:11.821 [2024-11-29 12:09:17.284925] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:25:11.821 [2024-11-29 12:09:17.284950] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:25:11.821 [2024-11-29 12:09:17.285166] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:11.821 12:09:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.821 12:09:17 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.080 12:09:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.080 "name": "raid_bdev1", 00:25:12.080 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:12.080 "strip_size_kb": 0, 00:25:12.080 "state": "online", 00:25:12.080 "raid_level": "raid1", 00:25:12.080 "superblock": false, 00:25:12.080 "num_base_bdevs": 4, 00:25:12.080 "num_base_bdevs_discovered": 4, 00:25:12.080 "num_base_bdevs_operational": 4, 00:25:12.080 "base_bdevs_list": [ 00:25:12.080 { 00:25:12.080 "name": "BaseBdev1", 00:25:12.080 "uuid": "2d0c28cd-0ec6-4a40-b8ac-cca08c268ac4", 00:25:12.080 "is_configured": true, 00:25:12.080 "data_offset": 0, 00:25:12.080 "data_size": 65536 00:25:12.080 }, 00:25:12.080 { 00:25:12.080 "name": "BaseBdev2", 00:25:12.080 "uuid": "2f8550e4-12f1-404b-925b-a1391076c97e", 00:25:12.080 "is_configured": true, 00:25:12.080 "data_offset": 0, 00:25:12.080 "data_size": 65536 00:25:12.080 }, 00:25:12.080 { 00:25:12.080 "name": "BaseBdev3", 00:25:12.080 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:12.080 "is_configured": true, 00:25:12.080 "data_offset": 0, 00:25:12.080 "data_size": 65536 00:25:12.080 }, 00:25:12.080 { 00:25:12.080 "name": "BaseBdev4", 00:25:12.080 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:12.080 "is_configured": true, 00:25:12.080 "data_offset": 0, 00:25:12.080 "data_size": 65536 00:25:12.080 } 00:25:12.080 ] 00:25:12.080 }' 00:25:12.080 12:09:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.080 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:25:12.648 12:09:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:12.648 12:09:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:12.909 [2024-11-29 12:09:18.358049] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.909 12:09:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:25:12.909 12:09:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:12.909 12:09:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.167 12:09:18 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:13.167 12:09:18 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:13.167 12:09:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:13.167 12:09:18 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:13.426 [2024-11-29 12:09:18.716571] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:25:13.426 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:13.426 Zero copy mechanism will not be used. 00:25:13.426 Running I/O for 60 seconds... 
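The stage above — four malloc base bdevs assembled into a superblock-less RAID1 and then degraded by pulling BaseBdev1 while bdevperf drives background I/O — can be replayed by hand with the same RPCs the script calls. A minimal sketch, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and that scripts/rpc.py from the SPDK tree is reachable as rpc.py; it covers only the RPC side and omits the background bdevperf job:

# create four 32 MiB malloc bdevs with 512-byte blocks, as the test does
for i in 1 2 3 4; do
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# assemble them into a RAID1 bdev without an on-disk superblock
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

# confirm the array is online with all four members discovered
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'

# degrade the array, as the test does for BaseBdev1 while I/O is in flight
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1

After the removal the array stays online with num_base_bdevs_discovered dropping to 3, which is what the verify_raid_bdev_state call below asserts.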
00:25:13.426 [2024-11-29 12:09:18.818486] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:13.426 [2024-11-29 12:09:18.833721] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.426 12:09:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.684 12:09:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.684 "name": "raid_bdev1", 00:25:13.684 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:13.684 "strip_size_kb": 0, 00:25:13.684 "state": "online", 00:25:13.684 "raid_level": "raid1", 00:25:13.684 "superblock": false, 00:25:13.684 "num_base_bdevs": 4, 00:25:13.684 "num_base_bdevs_discovered": 3, 00:25:13.684 "num_base_bdevs_operational": 3, 00:25:13.684 "base_bdevs_list": [ 00:25:13.684 { 00:25:13.684 "name": null, 00:25:13.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.684 "is_configured": false, 00:25:13.684 "data_offset": 0, 00:25:13.684 "data_size": 65536 00:25:13.684 }, 00:25:13.684 { 00:25:13.684 "name": "BaseBdev2", 00:25:13.684 "uuid": "2f8550e4-12f1-404b-925b-a1391076c97e", 00:25:13.684 "is_configured": true, 00:25:13.684 "data_offset": 0, 00:25:13.684 "data_size": 65536 00:25:13.684 }, 00:25:13.684 { 00:25:13.684 "name": "BaseBdev3", 00:25:13.684 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:13.684 "is_configured": true, 00:25:13.684 "data_offset": 0, 00:25:13.684 "data_size": 65536 00:25:13.684 }, 00:25:13.685 { 00:25:13.685 "name": "BaseBdev4", 00:25:13.685 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:13.685 "is_configured": true, 00:25:13.685 "data_offset": 0, 00:25:13.685 "data_size": 65536 00:25:13.685 } 00:25:13.685 ] 00:25:13.685 }' 00:25:13.685 12:09:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.685 12:09:19 -- common/autotest_common.sh@10 -- # set +x 00:25:14.621 12:09:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:14.621 [2024-11-29 12:09:20.088732] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:14.621 [2024-11-29 12:09:20.088817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.880 12:09:20 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:14.880 [2024-11-29 12:09:20.153539] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:25:14.880 [2024-11-29 12:09:20.156021] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:14.880 [2024-11-29 
12:09:20.292751] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:14.880 [2024-11-29 12:09:20.294201] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:15.141 [2024-11-29 12:09:20.528842] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:15.141 [2024-11-29 12:09:20.529202] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:15.710 [2024-11-29 12:09:20.986163] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.710 12:09:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.968 [2024-11-29 12:09:21.239736] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:15.968 [2024-11-29 12:09:21.352459] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:15.968 [2024-11-29 12:09:21.352824] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:15.968 12:09:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.968 "name": "raid_bdev1", 00:25:15.968 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:15.968 "strip_size_kb": 0, 00:25:15.968 "state": "online", 00:25:15.968 "raid_level": "raid1", 00:25:15.968 "superblock": false, 00:25:15.968 "num_base_bdevs": 4, 00:25:15.968 "num_base_bdevs_discovered": 4, 00:25:15.968 "num_base_bdevs_operational": 4, 00:25:15.968 "process": { 00:25:15.968 "type": "rebuild", 00:25:15.968 "target": "spare", 00:25:15.968 "progress": { 00:25:15.968 "blocks": 16384, 00:25:15.968 "percent": 25 00:25:15.968 } 00:25:15.968 }, 00:25:15.968 "base_bdevs_list": [ 00:25:15.968 { 00:25:15.968 "name": "spare", 00:25:15.968 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:15.968 "is_configured": true, 00:25:15.968 "data_offset": 0, 00:25:15.968 "data_size": 65536 00:25:15.968 }, 00:25:15.968 { 00:25:15.968 "name": "BaseBdev2", 00:25:15.968 "uuid": "2f8550e4-12f1-404b-925b-a1391076c97e", 00:25:15.969 "is_configured": true, 00:25:15.969 "data_offset": 0, 00:25:15.969 "data_size": 65536 00:25:15.969 }, 00:25:15.969 { 00:25:15.969 "name": "BaseBdev3", 00:25:15.969 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:15.969 "is_configured": true, 00:25:15.969 "data_offset": 0, 00:25:15.969 "data_size": 65536 00:25:15.969 }, 00:25:15.969 { 00:25:15.969 "name": "BaseBdev4", 00:25:15.969 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:15.969 "is_configured": true, 00:25:15.969 "data_offset": 0, 00:25:15.969 "data_size": 65536 00:25:15.969 } 00:25:15.969 ] 00:25:15.969 }' 00:25:15.969 12:09:21 -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.type // "none"' 00:25:15.969 12:09:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.969 12:09:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.227 12:09:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.227 12:09:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:16.227 [2024-11-29 12:09:21.627059] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:16.227 [2024-11-29 12:09:21.736274] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:16.227 [2024-11-29 12:09:21.737051] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:16.486 [2024-11-29 12:09:21.759046] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.486 [2024-11-29 12:09:21.850532] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:16.486 [2024-11-29 12:09:21.961011] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:16.486 [2024-11-29 12:09:21.965268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.486 [2024-11-29 12:09:21.988869] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.744 12:09:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.003 12:09:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:17.003 "name": "raid_bdev1", 00:25:17.003 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:17.003 "strip_size_kb": 0, 00:25:17.003 "state": "online", 00:25:17.003 "raid_level": "raid1", 00:25:17.003 "superblock": false, 00:25:17.003 "num_base_bdevs": 4, 00:25:17.003 "num_base_bdevs_discovered": 3, 00:25:17.003 "num_base_bdevs_operational": 3, 00:25:17.003 "base_bdevs_list": [ 00:25:17.003 { 00:25:17.003 "name": null, 00:25:17.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.003 "is_configured": false, 00:25:17.003 "data_offset": 0, 00:25:17.003 "data_size": 65536 00:25:17.003 }, 00:25:17.003 { 00:25:17.003 "name": "BaseBdev2", 00:25:17.003 "uuid": "2f8550e4-12f1-404b-925b-a1391076c97e", 00:25:17.003 "is_configured": true, 00:25:17.003 "data_offset": 0, 00:25:17.003 "data_size": 65536 00:25:17.003 }, 00:25:17.003 
{ 00:25:17.003 "name": "BaseBdev3", 00:25:17.003 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:17.003 "is_configured": true, 00:25:17.003 "data_offset": 0, 00:25:17.003 "data_size": 65536 00:25:17.003 }, 00:25:17.003 { 00:25:17.003 "name": "BaseBdev4", 00:25:17.003 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:17.003 "is_configured": true, 00:25:17.003 "data_offset": 0, 00:25:17.003 "data_size": 65536 00:25:17.003 } 00:25:17.003 ] 00:25:17.003 }' 00:25:17.003 12:09:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:17.003 12:09:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.570 12:09:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.828 12:09:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:17.828 "name": "raid_bdev1", 00:25:17.828 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:17.828 "strip_size_kb": 0, 00:25:17.828 "state": "online", 00:25:17.828 "raid_level": "raid1", 00:25:17.828 "superblock": false, 00:25:17.828 "num_base_bdevs": 4, 00:25:17.828 "num_base_bdevs_discovered": 3, 00:25:17.828 "num_base_bdevs_operational": 3, 00:25:17.828 "base_bdevs_list": [ 00:25:17.828 { 00:25:17.828 "name": null, 00:25:17.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.828 "is_configured": false, 00:25:17.828 "data_offset": 0, 00:25:17.828 "data_size": 65536 00:25:17.828 }, 00:25:17.828 { 00:25:17.828 "name": "BaseBdev2", 00:25:17.828 "uuid": "2f8550e4-12f1-404b-925b-a1391076c97e", 00:25:17.828 "is_configured": true, 00:25:17.828 "data_offset": 0, 00:25:17.828 "data_size": 65536 00:25:17.828 }, 00:25:17.828 { 00:25:17.828 "name": "BaseBdev3", 00:25:17.828 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:17.828 "is_configured": true, 00:25:17.828 "data_offset": 0, 00:25:17.828 "data_size": 65536 00:25:17.828 }, 00:25:17.828 { 00:25:17.828 "name": "BaseBdev4", 00:25:17.828 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:17.828 "is_configured": true, 00:25:17.828 "data_offset": 0, 00:25:17.828 "data_size": 65536 00:25:17.828 } 00:25:17.828 ] 00:25:17.828 }' 00:25:17.828 12:09:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:17.828 12:09:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:17.828 12:09:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.087 12:09:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:18.087 12:09:23 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:18.345 [2024-11-29 12:09:23.611371] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:18.345 [2024-11-29 12:09:23.611448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:18.345 12:09:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:18.345 [2024-11-29 12:09:23.658234] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:18.345 
[2024-11-29 12:09:23.660569] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:18.345 [2024-11-29 12:09:23.798031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:18.345 [2024-11-29 12:09:23.799516] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:18.604 [2024-11-29 12:09:24.009869] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:18.604 [2024-11-29 12:09:24.010214] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:18.863 [2024-11-29 12:09:24.362297] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:18.863 [2024-11-29 12:09:24.362956] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:19.122 [2024-11-29 12:09:24.513800] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.500 [2024-11-29 12:09:24.785973] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.500 "name": "raid_bdev1", 00:25:19.500 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:19.500 "strip_size_kb": 0, 00:25:19.500 "state": "online", 00:25:19.500 "raid_level": "raid1", 00:25:19.500 "superblock": false, 00:25:19.500 "num_base_bdevs": 4, 00:25:19.500 "num_base_bdevs_discovered": 4, 00:25:19.500 "num_base_bdevs_operational": 4, 00:25:19.500 "process": { 00:25:19.500 "type": "rebuild", 00:25:19.500 "target": "spare", 00:25:19.500 "progress": { 00:25:19.500 "blocks": 14336, 00:25:19.500 "percent": 21 00:25:19.500 } 00:25:19.500 }, 00:25:19.500 "base_bdevs_list": [ 00:25:19.500 { 00:25:19.500 "name": "spare", 00:25:19.500 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:19.500 "is_configured": true, 00:25:19.500 "data_offset": 0, 00:25:19.500 "data_size": 65536 00:25:19.500 }, 00:25:19.500 { 00:25:19.500 "name": "BaseBdev2", 00:25:19.500 "uuid": "2f8550e4-12f1-404b-925b-a1391076c97e", 00:25:19.500 "is_configured": true, 00:25:19.500 "data_offset": 0, 00:25:19.500 "data_size": 65536 00:25:19.500 }, 00:25:19.500 { 00:25:19.500 "name": "BaseBdev3", 00:25:19.500 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:19.500 "is_configured": true, 00:25:19.500 "data_offset": 0, 00:25:19.500 "data_size": 65536 00:25:19.500 }, 00:25:19.500 { 00:25:19.500 "name": "BaseBdev4", 00:25:19.500 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:19.500 "is_configured": true, 
00:25:19.500 "data_offset": 0, 00:25:19.500 "data_size": 65536 00:25:19.500 } 00:25:19.500 ] 00:25:19.500 }' 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.500 [2024-11-29 12:09:24.927853] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:19.500 12:09:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.781 12:09:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.781 12:09:25 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:19.781 12:09:25 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:19.781 12:09:25 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:19.781 12:09:25 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:19.781 12:09:25 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:19.781 [2024-11-29 12:09:25.228760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:19.781 [2024-11-29 12:09:25.260678] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:19.781 [2024-11-29 12:09:25.262104] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:20.039 [2024-11-29 12:09:25.371555] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002390 00:25:20.039 [2024-11-29 12:09:25.371628] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.039 12:09:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.039 [2024-11-29 12:09:25.503409] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:20.296 "name": "raid_bdev1", 00:25:20.296 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:20.296 "strip_size_kb": 0, 00:25:20.296 "state": "online", 00:25:20.296 "raid_level": "raid1", 00:25:20.296 "superblock": false, 00:25:20.296 "num_base_bdevs": 4, 00:25:20.296 "num_base_bdevs_discovered": 3, 00:25:20.296 "num_base_bdevs_operational": 3, 00:25:20.296 "process": { 00:25:20.296 "type": "rebuild", 00:25:20.296 "target": "spare", 00:25:20.296 "progress": { 00:25:20.296 "blocks": 22528, 00:25:20.296 "percent": 34 00:25:20.296 } 00:25:20.296 }, 00:25:20.296 "base_bdevs_list": [ 00:25:20.296 { 00:25:20.296 "name": "spare", 00:25:20.296 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:20.296 "is_configured": true, 
00:25:20.296 "data_offset": 0, 00:25:20.296 "data_size": 65536 00:25:20.296 }, 00:25:20.296 { 00:25:20.296 "name": null, 00:25:20.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.296 "is_configured": false, 00:25:20.296 "data_offset": 0, 00:25:20.296 "data_size": 65536 00:25:20.296 }, 00:25:20.296 { 00:25:20.296 "name": "BaseBdev3", 00:25:20.296 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:20.296 "is_configured": true, 00:25:20.296 "data_offset": 0, 00:25:20.296 "data_size": 65536 00:25:20.296 }, 00:25:20.296 { 00:25:20.296 "name": "BaseBdev4", 00:25:20.296 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:20.296 "is_configured": true, 00:25:20.296 "data_offset": 0, 00:25:20.296 "data_size": 65536 00:25:20.296 } 00:25:20.296 ] 00:25:20.296 }' 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@657 -- # local timeout=539 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.296 12:09:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.555 [2024-11-29 12:09:25.938482] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:20.555 12:09:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:20.555 "name": "raid_bdev1", 00:25:20.555 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:20.555 "strip_size_kb": 0, 00:25:20.555 "state": "online", 00:25:20.555 "raid_level": "raid1", 00:25:20.555 "superblock": false, 00:25:20.555 "num_base_bdevs": 4, 00:25:20.555 "num_base_bdevs_discovered": 3, 00:25:20.555 "num_base_bdevs_operational": 3, 00:25:20.555 "process": { 00:25:20.555 "type": "rebuild", 00:25:20.555 "target": "spare", 00:25:20.555 "progress": { 00:25:20.555 "blocks": 28672, 00:25:20.555 "percent": 43 00:25:20.555 } 00:25:20.555 }, 00:25:20.555 "base_bdevs_list": [ 00:25:20.555 { 00:25:20.555 "name": "spare", 00:25:20.555 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:20.555 "is_configured": true, 00:25:20.555 "data_offset": 0, 00:25:20.555 "data_size": 65536 00:25:20.555 }, 00:25:20.555 { 00:25:20.555 "name": null, 00:25:20.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.555 "is_configured": false, 00:25:20.555 "data_offset": 0, 00:25:20.555 "data_size": 65536 00:25:20.555 }, 00:25:20.555 { 00:25:20.555 "name": "BaseBdev3", 00:25:20.555 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:20.555 "is_configured": true, 00:25:20.555 "data_offset": 0, 00:25:20.555 "data_size": 65536 00:25:20.555 }, 00:25:20.555 { 00:25:20.555 "name": "BaseBdev4", 00:25:20.555 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:20.555 "is_configured": true, 
00:25:20.555 "data_offset": 0, 00:25:20.555 "data_size": 65536 00:25:20.555 } 00:25:20.555 ] 00:25:20.555 }' 00:25:20.555 12:09:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:20.814 12:09:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.814 12:09:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:20.814 12:09:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.814 12:09:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:20.814 [2024-11-29 12:09:26.277616] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:21.748 [2024-11-29 12:09:26.901979] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:25:21.748 [2024-11-29 12:09:27.123509] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.748 12:09:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.006 12:09:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:22.006 "name": "raid_bdev1", 00:25:22.006 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:22.006 "strip_size_kb": 0, 00:25:22.006 "state": "online", 00:25:22.006 "raid_level": "raid1", 00:25:22.006 "superblock": false, 00:25:22.006 "num_base_bdevs": 4, 00:25:22.006 "num_base_bdevs_discovered": 3, 00:25:22.006 "num_base_bdevs_operational": 3, 00:25:22.006 "process": { 00:25:22.006 "type": "rebuild", 00:25:22.006 "target": "spare", 00:25:22.006 "progress": { 00:25:22.006 "blocks": 51200, 00:25:22.006 "percent": 78 00:25:22.006 } 00:25:22.006 }, 00:25:22.006 "base_bdevs_list": [ 00:25:22.006 { 00:25:22.006 "name": "spare", 00:25:22.006 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:22.006 "is_configured": true, 00:25:22.006 "data_offset": 0, 00:25:22.006 "data_size": 65536 00:25:22.006 }, 00:25:22.006 { 00:25:22.006 "name": null, 00:25:22.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.006 "is_configured": false, 00:25:22.006 "data_offset": 0, 00:25:22.006 "data_size": 65536 00:25:22.006 }, 00:25:22.006 { 00:25:22.006 "name": "BaseBdev3", 00:25:22.006 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:22.006 "is_configured": true, 00:25:22.006 "data_offset": 0, 00:25:22.006 "data_size": 65536 00:25:22.006 }, 00:25:22.006 { 00:25:22.006 "name": "BaseBdev4", 00:25:22.006 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:22.006 "is_configured": true, 00:25:22.006 "data_offset": 0, 00:25:22.006 "data_size": 65536 00:25:22.006 } 00:25:22.006 ] 00:25:22.006 }' 00:25:22.006 12:09:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:22.006 12:09:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:22.006 12:09:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // 
"none"' 00:25:22.006 12:09:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:22.006 12:09:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:22.939 [2024-11-29 12:09:28.134843] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:22.939 [2024-11-29 12:09:28.234810] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:22.939 [2024-11-29 12:09:28.238417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.198 12:09:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:23.457 "name": "raid_bdev1", 00:25:23.457 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:23.457 "strip_size_kb": 0, 00:25:23.457 "state": "online", 00:25:23.457 "raid_level": "raid1", 00:25:23.457 "superblock": false, 00:25:23.457 "num_base_bdevs": 4, 00:25:23.457 "num_base_bdevs_discovered": 3, 00:25:23.457 "num_base_bdevs_operational": 3, 00:25:23.457 "base_bdevs_list": [ 00:25:23.457 { 00:25:23.457 "name": "spare", 00:25:23.457 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:23.457 "is_configured": true, 00:25:23.457 "data_offset": 0, 00:25:23.457 "data_size": 65536 00:25:23.457 }, 00:25:23.457 { 00:25:23.457 "name": null, 00:25:23.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.457 "is_configured": false, 00:25:23.457 "data_offset": 0, 00:25:23.457 "data_size": 65536 00:25:23.457 }, 00:25:23.457 { 00:25:23.457 "name": "BaseBdev3", 00:25:23.457 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:23.457 "is_configured": true, 00:25:23.457 "data_offset": 0, 00:25:23.457 "data_size": 65536 00:25:23.457 }, 00:25:23.457 { 00:25:23.457 "name": "BaseBdev4", 00:25:23.457 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:23.457 "is_configured": true, 00:25:23.457 "data_offset": 0, 00:25:23.457 "data_size": 65536 00:25:23.457 } 00:25:23.457 ] 00:25:23.457 }' 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@660 -- # break 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:23.457 12:09:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.716 12:09:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:23.716 "name": "raid_bdev1", 00:25:23.716 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:23.716 "strip_size_kb": 0, 00:25:23.716 "state": "online", 00:25:23.716 "raid_level": "raid1", 00:25:23.716 "superblock": false, 00:25:23.716 "num_base_bdevs": 4, 00:25:23.716 "num_base_bdevs_discovered": 3, 00:25:23.716 "num_base_bdevs_operational": 3, 00:25:23.716 "base_bdevs_list": [ 00:25:23.716 { 00:25:23.716 "name": "spare", 00:25:23.716 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:23.716 "is_configured": true, 00:25:23.716 "data_offset": 0, 00:25:23.716 "data_size": 65536 00:25:23.716 }, 00:25:23.716 { 00:25:23.716 "name": null, 00:25:23.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.716 "is_configured": false, 00:25:23.716 "data_offset": 0, 00:25:23.716 "data_size": 65536 00:25:23.716 }, 00:25:23.716 { 00:25:23.716 "name": "BaseBdev3", 00:25:23.716 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:23.716 "is_configured": true, 00:25:23.716 "data_offset": 0, 00:25:23.716 "data_size": 65536 00:25:23.716 }, 00:25:23.716 { 00:25:23.716 "name": "BaseBdev4", 00:25:23.716 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:23.716 "is_configured": true, 00:25:23.716 "data_offset": 0, 00:25:23.716 "data_size": 65536 00:25:23.716 } 00:25:23.716 ] 00:25:23.716 }' 00:25:23.716 12:09:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:23.716 12:09:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:23.716 12:09:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.973 12:09:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.230 12:09:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.230 "name": "raid_bdev1", 00:25:24.230 "uuid": "f1bde655-3e75-4030-bfe6-e05044f07d6d", 00:25:24.230 "strip_size_kb": 0, 00:25:24.230 "state": "online", 00:25:24.230 "raid_level": "raid1", 00:25:24.230 "superblock": false, 00:25:24.230 "num_base_bdevs": 4, 00:25:24.230 "num_base_bdevs_discovered": 3, 00:25:24.230 "num_base_bdevs_operational": 3, 00:25:24.230 "base_bdevs_list": [ 00:25:24.230 { 00:25:24.230 "name": "spare", 00:25:24.230 "uuid": "281892b9-7c30-5fab-b973-1e53538a2d26", 00:25:24.230 "is_configured": true, 00:25:24.230 "data_offset": 0, 00:25:24.230 "data_size": 65536 00:25:24.230 }, 00:25:24.230 { 
00:25:24.230 "name": null, 00:25:24.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.230 "is_configured": false, 00:25:24.230 "data_offset": 0, 00:25:24.230 "data_size": 65536 00:25:24.230 }, 00:25:24.230 { 00:25:24.230 "name": "BaseBdev3", 00:25:24.230 "uuid": "b2c6c66f-ea08-44ed-9cf8-2c102fec171d", 00:25:24.230 "is_configured": true, 00:25:24.230 "data_offset": 0, 00:25:24.230 "data_size": 65536 00:25:24.230 }, 00:25:24.230 { 00:25:24.230 "name": "BaseBdev4", 00:25:24.230 "uuid": "8507159f-a8ee-4c79-a69c-84c096ed42eb", 00:25:24.230 "is_configured": true, 00:25:24.230 "data_offset": 0, 00:25:24.230 "data_size": 65536 00:25:24.230 } 00:25:24.230 ] 00:25:24.230 }' 00:25:24.230 12:09:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.230 12:09:29 -- common/autotest_common.sh@10 -- # set +x 00:25:24.795 12:09:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:25.054 [2024-11-29 12:09:30.345247] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:25.054 [2024-11-29 12:09:30.345588] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:25.054 00:25:25.054 Latency(us) 00:25:25.054 [2024-11-29T12:09:30.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.054 [2024-11-29T12:09:30.565Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:25.054 raid_bdev1 : 11.73 90.91 272.73 0.00 0.00 14438.14 342.57 117726.49 00:25:25.054 [2024-11-29T12:09:30.565Z] =================================================================================================================== 00:25:25.054 [2024-11-29T12:09:30.565Z] Total : 90.91 272.73 0.00 0.00 14438.14 342.57 117726.49 00:25:25.054 [2024-11-29 12:09:30.450443] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.054 0 00:25:25.054 [2024-11-29 12:09:30.450679] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.054 [2024-11-29 12:09:30.450931] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.054 [2024-11-29 12:09:30.451054] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:25:25.054 12:09:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.054 12:09:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:25.312 12:09:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:25.312 12:09:30 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:25.312 12:09:30 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@12 -- # local i 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:25.312 12:09:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 
00:25:25.570 /dev/nbd0 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:25.570 12:09:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:25.570 12:09:30 -- common/autotest_common.sh@867 -- # local i 00:25:25.570 12:09:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:25.570 12:09:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:25.570 12:09:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:25.570 12:09:30 -- common/autotest_common.sh@871 -- # break 00:25:25.570 12:09:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:25.570 12:09:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:25.570 12:09:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:25.570 1+0 records in 00:25:25.570 1+0 records out 00:25:25.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367075 s, 11.2 MB/s 00:25:25.570 12:09:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:25.570 12:09:30 -- common/autotest_common.sh@884 -- # size=4096 00:25:25.570 12:09:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:25.570 12:09:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:25.570 12:09:30 -- common/autotest_common.sh@887 -- # return 0 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:25.570 12:09:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:25.570 12:09:30 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:25.570 12:09:30 -- bdev/bdev_raid.sh@678 -- # continue 00:25:25.570 12:09:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:25.570 12:09:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:25.570 12:09:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@12 -- # local i 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:25.570 12:09:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:25.829 /dev/nbd1 00:25:25.829 12:09:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:25.829 12:09:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:25.829 12:09:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:25.829 12:09:31 -- common/autotest_common.sh@867 -- # local i 00:25:25.829 12:09:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:25.829 12:09:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:25.829 12:09:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:25.829 12:09:31 -- common/autotest_common.sh@871 -- # break 00:25:25.829 12:09:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:25.829 12:09:31 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:25.829 12:09:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:25.829 1+0 records in 00:25:25.829 1+0 records out 00:25:25.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031751 s, 12.9 MB/s 00:25:25.829 12:09:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:25.829 12:09:31 -- common/autotest_common.sh@884 -- # size=4096 00:25:25.829 12:09:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:25.829 12:09:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:25.829 12:09:31 -- common/autotest_common.sh@887 -- # return 0 00:25:25.829 12:09:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:25.829 12:09:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:25.829 12:09:31 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:26.087 12:09:31 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:26.087 12:09:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.087 12:09:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:26.087 12:09:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:26.087 12:09:31 -- bdev/nbd_common.sh@51 -- # local i 00:25:26.087 12:09:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:26.087 12:09:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@41 -- # break 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@45 -- # return 0 00:25:26.345 12:09:31 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:26.345 12:09:31 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:26.345 12:09:31 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@12 -- # local i 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.345 12:09:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:26.603 /dev/nbd1 00:25:26.603 12:09:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:26.603 12:09:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:26.603 12:09:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:26.603 12:09:31 -- common/autotest_common.sh@867 -- # local i 00:25:26.603 12:09:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:26.603 
12:09:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:26.603 12:09:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:26.603 12:09:31 -- common/autotest_common.sh@871 -- # break 00:25:26.603 12:09:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:26.603 12:09:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:26.603 12:09:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:26.603 1+0 records in 00:25:26.603 1+0 records out 00:25:26.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384288 s, 10.7 MB/s 00:25:26.603 12:09:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:26.603 12:09:31 -- common/autotest_common.sh@884 -- # size=4096 00:25:26.603 12:09:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:26.603 12:09:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:26.603 12:09:32 -- common/autotest_common.sh@887 -- # return 0 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.603 12:09:32 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:26.603 12:09:32 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@51 -- # local i 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:26.603 12:09:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@41 -- # break 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@45 -- # return 0 00:25:26.862 12:09:32 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@51 -- # local i 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:26.862 12:09:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:27.119 12:09:32 
-- bdev/nbd_common.sh@41 -- # break 00:25:27.119 12:09:32 -- bdev/nbd_common.sh@45 -- # return 0 00:25:27.119 12:09:32 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:27.119 12:09:32 -- bdev/bdev_raid.sh@709 -- # killprocess 137252 00:25:27.119 12:09:32 -- common/autotest_common.sh@936 -- # '[' -z 137252 ']' 00:25:27.119 12:09:32 -- common/autotest_common.sh@940 -- # kill -0 137252 00:25:27.119 12:09:32 -- common/autotest_common.sh@941 -- # uname 00:25:27.119 12:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:27.119 12:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137252 00:25:27.119 12:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:27.119 12:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:27.119 12:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137252' 00:25:27.119 killing process with pid 137252 00:25:27.119 12:09:32 -- common/autotest_common.sh@955 -- # kill 137252 00:25:27.119 Received shutdown signal, test time was about 13.900263 seconds 00:25:27.119 00:25:27.119 Latency(us) 00:25:27.119 [2024-11-29T12:09:32.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.119 [2024-11-29T12:09:32.630Z] =================================================================================================================== 00:25:27.119 [2024-11-29T12:09:32.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.119 [2024-11-29 12:09:32.619353] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:27.119 12:09:32 -- common/autotest_common.sh@960 -- # wait 137252 00:25:27.377 [2024-11-29 12:09:32.678340] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:27.635 12:09:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:27.635 00:25:27.635 real 0m18.732s 00:25:27.635 user 0m29.981s 00:25:27.635 sys 0m2.315s 00:25:27.635 12:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:27.635 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:25:27.635 ************************************ 00:25:27.635 END TEST raid_rebuild_test_io 00:25:27.635 ************************************ 00:25:27.635 12:09:32 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:25:27.635 12:09:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:27.635 12:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:27.635 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:25:27.635 ************************************ 00:25:27.635 START TEST raid_rebuild_test_sb_io 00:25:27.635 ************************************ 00:25:27.635 12:09:33 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 
-- # (( i++ )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:27.635 12:09:33 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=137757 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137757 /var/tmp/spdk-raid.sock 00:25:27.636 12:09:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:27.636 12:09:33 -- common/autotest_common.sh@829 -- # '[' -z 137757 ']' 00:25:27.636 12:09:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:27.636 12:09:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.636 12:09:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:27.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:27.636 12:09:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.636 12:09:33 -- common/autotest_common.sh@10 -- # set +x 00:25:27.636 [2024-11-29 12:09:33.061886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:27.636 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:27.636 Zero copy mechanism will not be used. 
00:25:27.636 [2024-11-29 12:09:33.062113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137757 ] 00:25:27.894 [2024-11-29 12:09:33.200917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.894 [2024-11-29 12:09:33.297675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.894 [2024-11-29 12:09:33.353221] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:28.826 12:09:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.826 12:09:34 -- common/autotest_common.sh@862 -- # return 0 00:25:28.826 12:09:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:28.826 12:09:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:28.826 12:09:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:28.826 BaseBdev1_malloc 00:25:28.826 12:09:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:29.085 [2024-11-29 12:09:34.539029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:29.085 [2024-11-29 12:09:34.539174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.085 [2024-11-29 12:09:34.539230] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:25:29.085 [2024-11-29 12:09:34.539293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.085 [2024-11-29 12:09:34.542092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.085 [2024-11-29 12:09:34.542176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:29.085 BaseBdev1 00:25:29.085 12:09:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:29.085 12:09:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:29.085 12:09:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:29.372 BaseBdev2_malloc 00:25:29.372 12:09:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:29.630 [2024-11-29 12:09:35.010484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:29.630 [2024-11-29 12:09:35.010602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.630 [2024-11-29 12:09:35.010651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:29.630 [2024-11-29 12:09:35.010704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.630 [2024-11-29 12:09:35.013286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.630 [2024-11-29 12:09:35.013347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:29.630 BaseBdev2 00:25:29.630 12:09:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:29.630 12:09:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:29.630 12:09:35 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:29.888 BaseBdev3_malloc 00:25:29.888 12:09:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:30.146 [2024-11-29 12:09:35.503173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:30.146 [2024-11-29 12:09:35.503300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.146 [2024-11-29 12:09:35.503351] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:30.146 [2024-11-29 12:09:35.503411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.146 [2024-11-29 12:09:35.506019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.146 [2024-11-29 12:09:35.506093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:30.146 BaseBdev3 00:25:30.146 12:09:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:30.146 12:09:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:30.146 12:09:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:30.403 BaseBdev4_malloc 00:25:30.403 12:09:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:30.661 [2024-11-29 12:09:35.978839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:30.661 [2024-11-29 12:09:35.978959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.661 [2024-11-29 12:09:35.979002] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:30.661 [2024-11-29 12:09:35.979049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.661 [2024-11-29 12:09:35.981618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.661 [2024-11-29 12:09:35.981708] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:30.661 BaseBdev4 00:25:30.661 12:09:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:30.920 spare_malloc 00:25:30.920 12:09:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:31.178 spare_delay 00:25:31.178 12:09:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:31.436 [2024-11-29 12:09:36.742464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:31.436 [2024-11-29 12:09:36.742590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.436 [2024-11-29 12:09:36.742634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:31.436 [2024-11-29 12:09:36.742682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.436 [2024-11-29 12:09:36.745363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:31.436 [2024-11-29 12:09:36.745429] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:31.436 spare 00:25:31.436 12:09:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:31.693 [2024-11-29 12:09:37.042609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.693 [2024-11-29 12:09:37.044977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:31.693 [2024-11-29 12:09:37.045069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:31.693 [2024-11-29 12:09:37.045140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:31.693 [2024-11-29 12:09:37.045398] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:31.693 [2024-11-29 12:09:37.045426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:31.693 [2024-11-29 12:09:37.045615] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:31.693 [2024-11-29 12:09:37.046132] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:31.693 [2024-11-29 12:09:37.046159] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:31.693 [2024-11-29 12:09:37.046335] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.693 12:09:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.694 12:09:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.694 12:09:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.952 12:09:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.952 "name": "raid_bdev1", 00:25:31.952 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:31.952 "strip_size_kb": 0, 00:25:31.952 "state": "online", 00:25:31.952 "raid_level": "raid1", 00:25:31.952 "superblock": true, 00:25:31.952 "num_base_bdevs": 4, 00:25:31.952 "num_base_bdevs_discovered": 4, 00:25:31.952 "num_base_bdevs_operational": 4, 00:25:31.952 "base_bdevs_list": [ 00:25:31.952 { 00:25:31.952 "name": "BaseBdev1", 00:25:31.952 "uuid": "52adf1cf-da2e-5d08-bb35-917c42c7bc64", 00:25:31.952 "is_configured": true, 00:25:31.952 "data_offset": 2048, 00:25:31.952 "data_size": 63488 00:25:31.952 }, 00:25:31.952 { 00:25:31.952 "name": "BaseBdev2", 00:25:31.952 "uuid": "80f904b5-b44d-532a-ba58-4217e43b90c6", 00:25:31.952 "is_configured": true, 00:25:31.952 "data_offset": 2048, 
00:25:31.952 "data_size": 63488 00:25:31.952 }, 00:25:31.952 { 00:25:31.952 "name": "BaseBdev3", 00:25:31.952 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:31.952 "is_configured": true, 00:25:31.952 "data_offset": 2048, 00:25:31.952 "data_size": 63488 00:25:31.952 }, 00:25:31.952 { 00:25:31.952 "name": "BaseBdev4", 00:25:31.952 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:31.952 "is_configured": true, 00:25:31.952 "data_offset": 2048, 00:25:31.952 "data_size": 63488 00:25:31.952 } 00:25:31.952 ] 00:25:31.952 }' 00:25:31.952 12:09:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.952 12:09:37 -- common/autotest_common.sh@10 -- # set +x 00:25:32.517 12:09:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:32.517 12:09:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:32.775 [2024-11-29 12:09:38.163185] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:32.775 12:09:38 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:25:32.775 12:09:38 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.775 12:09:38 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:33.033 12:09:38 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:33.033 12:09:38 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:33.033 12:09:38 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:33.033 12:09:38 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:33.033 [2024-11-29 12:09:38.497602] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:25:33.033 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:33.033 Zero copy mechanism will not be used. 00:25:33.033 Running I/O for 60 seconds... 
00:25:33.292 [2024-11-29 12:09:38.673943] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:33.292 [2024-11-29 12:09:38.674221] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.292 12:09:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.550 12:09:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:33.550 "name": "raid_bdev1", 00:25:33.550 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:33.550 "strip_size_kb": 0, 00:25:33.550 "state": "online", 00:25:33.550 "raid_level": "raid1", 00:25:33.550 "superblock": true, 00:25:33.550 "num_base_bdevs": 4, 00:25:33.550 "num_base_bdevs_discovered": 3, 00:25:33.550 "num_base_bdevs_operational": 3, 00:25:33.550 "base_bdevs_list": [ 00:25:33.550 { 00:25:33.550 "name": null, 00:25:33.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.550 "is_configured": false, 00:25:33.550 "data_offset": 2048, 00:25:33.550 "data_size": 63488 00:25:33.550 }, 00:25:33.550 { 00:25:33.550 "name": "BaseBdev2", 00:25:33.550 "uuid": "80f904b5-b44d-532a-ba58-4217e43b90c6", 00:25:33.550 "is_configured": true, 00:25:33.550 "data_offset": 2048, 00:25:33.550 "data_size": 63488 00:25:33.550 }, 00:25:33.550 { 00:25:33.550 "name": "BaseBdev3", 00:25:33.550 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:33.550 "is_configured": true, 00:25:33.550 "data_offset": 2048, 00:25:33.550 "data_size": 63488 00:25:33.550 }, 00:25:33.550 { 00:25:33.550 "name": "BaseBdev4", 00:25:33.550 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:33.550 "is_configured": true, 00:25:33.550 "data_offset": 2048, 00:25:33.550 "data_size": 63488 00:25:33.550 } 00:25:33.550 ] 00:25:33.550 }' 00:25:33.550 12:09:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:33.550 12:09:38 -- common/autotest_common.sh@10 -- # set +x 00:25:34.116 12:09:39 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:34.374 [2024-11-29 12:09:39.867525] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:34.374 [2024-11-29 12:09:39.867603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:34.632 [2024-11-29 12:09:39.915353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:34.632 12:09:39 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:34.632 [2024-11-29 12:09:39.917749] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:34.632 
[2024-11-29 12:09:40.063676] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:34.890 [2024-11-29 12:09:40.304846] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:34.890 [2024-11-29 12:09:40.305627] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:35.457 [2024-11-29 12:09:40.678121] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:35.457 [2024-11-29 12:09:40.820948] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.457 12:09:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.721 [2024-11-29 12:09:41.184727] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:35.721 [2024-11-29 12:09:41.185359] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:35.721 12:09:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.721 "name": "raid_bdev1", 00:25:35.721 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:35.721 "strip_size_kb": 0, 00:25:35.721 "state": "online", 00:25:35.721 "raid_level": "raid1", 00:25:35.721 "superblock": true, 00:25:35.721 "num_base_bdevs": 4, 00:25:35.721 "num_base_bdevs_discovered": 4, 00:25:35.721 "num_base_bdevs_operational": 4, 00:25:35.721 "process": { 00:25:35.721 "type": "rebuild", 00:25:35.721 "target": "spare", 00:25:35.721 "progress": { 00:25:35.721 "blocks": 12288, 00:25:35.721 "percent": 19 00:25:35.721 } 00:25:35.721 }, 00:25:35.721 "base_bdevs_list": [ 00:25:35.721 { 00:25:35.721 "name": "spare", 00:25:35.721 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:35.721 "is_configured": true, 00:25:35.721 "data_offset": 2048, 00:25:35.721 "data_size": 63488 00:25:35.721 }, 00:25:35.721 { 00:25:35.721 "name": "BaseBdev2", 00:25:35.721 "uuid": "80f904b5-b44d-532a-ba58-4217e43b90c6", 00:25:35.721 "is_configured": true, 00:25:35.721 "data_offset": 2048, 00:25:35.721 "data_size": 63488 00:25:35.721 }, 00:25:35.721 { 00:25:35.721 "name": "BaseBdev3", 00:25:35.721 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:35.721 "is_configured": true, 00:25:35.721 "data_offset": 2048, 00:25:35.721 "data_size": 63488 00:25:35.721 }, 00:25:35.721 { 00:25:35.721 "name": "BaseBdev4", 00:25:35.721 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:35.721 "is_configured": true, 00:25:35.721 "data_offset": 2048, 00:25:35.721 "data_size": 63488 00:25:35.721 } 00:25:35.721 ] 00:25:35.721 }' 00:25:35.721 12:09:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.983 12:09:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.983 
12:09:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.983 12:09:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.983 12:09:41 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:35.983 [2024-11-29 12:09:41.308996] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:36.241 [2024-11-29 12:09:41.532498] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:36.241 [2024-11-29 12:09:41.717590] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:36.242 [2024-11-29 12:09:41.729243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.242 [2024-11-29 12:09:41.753039] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.500 12:09:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.759 12:09:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:36.759 "name": "raid_bdev1", 00:25:36.759 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:36.759 "strip_size_kb": 0, 00:25:36.759 "state": "online", 00:25:36.759 "raid_level": "raid1", 00:25:36.759 "superblock": true, 00:25:36.759 "num_base_bdevs": 4, 00:25:36.759 "num_base_bdevs_discovered": 3, 00:25:36.759 "num_base_bdevs_operational": 3, 00:25:36.759 "base_bdevs_list": [ 00:25:36.759 { 00:25:36.759 "name": null, 00:25:36.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.759 "is_configured": false, 00:25:36.759 "data_offset": 2048, 00:25:36.759 "data_size": 63488 00:25:36.759 }, 00:25:36.759 { 00:25:36.759 "name": "BaseBdev2", 00:25:36.759 "uuid": "80f904b5-b44d-532a-ba58-4217e43b90c6", 00:25:36.759 "is_configured": true, 00:25:36.759 "data_offset": 2048, 00:25:36.759 "data_size": 63488 00:25:36.759 }, 00:25:36.759 { 00:25:36.759 "name": "BaseBdev3", 00:25:36.759 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:36.759 "is_configured": true, 00:25:36.759 "data_offset": 2048, 00:25:36.759 "data_size": 63488 00:25:36.759 }, 00:25:36.759 { 00:25:36.759 "name": "BaseBdev4", 00:25:36.759 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:36.759 "is_configured": true, 00:25:36.759 "data_offset": 2048, 00:25:36.759 "data_size": 63488 00:25:36.759 } 00:25:36.759 ] 00:25:36.759 }' 00:25:36.759 12:09:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:36.759 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.325 
12:09:42 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:37.325 12:09:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:37.325 12:09:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:37.325 12:09:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:37.325 12:09:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:37.325 12:09:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.325 12:09:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.584 12:09:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:37.584 "name": "raid_bdev1", 00:25:37.584 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:37.584 "strip_size_kb": 0, 00:25:37.584 "state": "online", 00:25:37.584 "raid_level": "raid1", 00:25:37.584 "superblock": true, 00:25:37.584 "num_base_bdevs": 4, 00:25:37.584 "num_base_bdevs_discovered": 3, 00:25:37.584 "num_base_bdevs_operational": 3, 00:25:37.584 "base_bdevs_list": [ 00:25:37.584 { 00:25:37.584 "name": null, 00:25:37.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.584 "is_configured": false, 00:25:37.584 "data_offset": 2048, 00:25:37.584 "data_size": 63488 00:25:37.584 }, 00:25:37.584 { 00:25:37.584 "name": "BaseBdev2", 00:25:37.584 "uuid": "80f904b5-b44d-532a-ba58-4217e43b90c6", 00:25:37.584 "is_configured": true, 00:25:37.584 "data_offset": 2048, 00:25:37.584 "data_size": 63488 00:25:37.584 }, 00:25:37.584 { 00:25:37.584 "name": "BaseBdev3", 00:25:37.584 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:37.584 "is_configured": true, 00:25:37.584 "data_offset": 2048, 00:25:37.584 "data_size": 63488 00:25:37.584 }, 00:25:37.584 { 00:25:37.584 "name": "BaseBdev4", 00:25:37.584 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:37.584 "is_configured": true, 00:25:37.584 "data_offset": 2048, 00:25:37.584 "data_size": 63488 00:25:37.584 } 00:25:37.584 ] 00:25:37.584 }' 00:25:37.584 12:09:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:37.843 12:09:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:37.843 12:09:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:37.843 12:09:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:37.843 12:09:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:38.101 [2024-11-29 12:09:43.409717] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:38.101 [2024-11-29 12:09:43.409795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:38.101 12:09:43 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:38.101 [2024-11-29 12:09:43.457499] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:25:38.101 [2024-11-29 12:09:43.459955] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:38.101 [2024-11-29 12:09:43.588528] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:38.101 [2024-11-29 12:09:43.589985] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:38.360 [2024-11-29 12:09:43.833598] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:25:38.360 [2024-11-29 12:09:43.833997] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:38.927 [2024-11-29 12:09:44.301681] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:38.927 [2024-11-29 12:09:44.302514] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.186 12:09:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.186 [2024-11-29 12:09:44.637291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.444 "name": "raid_bdev1", 00:25:39.444 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:39.444 "strip_size_kb": 0, 00:25:39.444 "state": "online", 00:25:39.444 "raid_level": "raid1", 00:25:39.444 "superblock": true, 00:25:39.444 "num_base_bdevs": 4, 00:25:39.444 "num_base_bdevs_discovered": 4, 00:25:39.444 "num_base_bdevs_operational": 4, 00:25:39.444 "process": { 00:25:39.444 "type": "rebuild", 00:25:39.444 "target": "spare", 00:25:39.444 "progress": { 00:25:39.444 "blocks": 14336, 00:25:39.444 "percent": 22 00:25:39.444 } 00:25:39.444 }, 00:25:39.444 "base_bdevs_list": [ 00:25:39.444 { 00:25:39.444 "name": "spare", 00:25:39.444 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:39.444 "is_configured": true, 00:25:39.444 "data_offset": 2048, 00:25:39.444 "data_size": 63488 00:25:39.444 }, 00:25:39.444 { 00:25:39.444 "name": "BaseBdev2", 00:25:39.444 "uuid": "80f904b5-b44d-532a-ba58-4217e43b90c6", 00:25:39.444 "is_configured": true, 00:25:39.444 "data_offset": 2048, 00:25:39.444 "data_size": 63488 00:25:39.444 }, 00:25:39.444 { 00:25:39.444 "name": "BaseBdev3", 00:25:39.444 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:39.444 "is_configured": true, 00:25:39.444 "data_offset": 2048, 00:25:39.444 "data_size": 63488 00:25:39.444 }, 00:25:39.444 { 00:25:39.444 "name": "BaseBdev4", 00:25:39.444 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:39.444 "is_configured": true, 00:25:39.444 "data_offset": 2048, 00:25:39.444 "data_size": 63488 00:25:39.444 } 00:25:39.444 ] 00:25:39.444 }' 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.444 [2024-11-29 12:09:44.766717] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:39.444 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:39.444 12:09:44 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:39.702 [2024-11-29 12:09:45.057613] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:39.702 [2024-11-29 12:09:45.152161] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:39.702 [2024-11-29 12:09:45.160657] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000026d0 00:25:39.702 [2024-11-29 12:09:45.160713] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002940 00:25:39.702 [2024-11-29 12:09:45.160768] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:39.702 [2024-11-29 12:09:45.161613] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.959 12:09:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.959 [2024-11-29 12:09:45.363789] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:39.959 [2024-11-29 12:09:45.364148] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:40.223 "name": "raid_bdev1", 00:25:40.223 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:40.223 "strip_size_kb": 0, 00:25:40.223 "state": "online", 00:25:40.223 "raid_level": "raid1", 00:25:40.223 "superblock": true, 00:25:40.223 "num_base_bdevs": 4, 00:25:40.223 "num_base_bdevs_discovered": 3, 00:25:40.223 "num_base_bdevs_operational": 3, 00:25:40.223 "process": { 00:25:40.223 "type": "rebuild", 00:25:40.223 "target": "spare", 00:25:40.223 "progress": { 00:25:40.223 "blocks": 24576, 00:25:40.223 "percent": 38 00:25:40.223 } 00:25:40.223 }, 00:25:40.223 "base_bdevs_list": [ 00:25:40.223 { 00:25:40.223 "name": "spare", 00:25:40.223 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:40.223 "is_configured": true, 00:25:40.223 "data_offset": 2048, 00:25:40.223 "data_size": 63488 00:25:40.223 }, 00:25:40.223 { 00:25:40.223 "name": null, 00:25:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.223 "is_configured": false, 00:25:40.223 "data_offset": 2048, 
00:25:40.223 "data_size": 63488 00:25:40.223 }, 00:25:40.223 { 00:25:40.223 "name": "BaseBdev3", 00:25:40.223 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:40.223 "is_configured": true, 00:25:40.223 "data_offset": 2048, 00:25:40.223 "data_size": 63488 00:25:40.223 }, 00:25:40.223 { 00:25:40.223 "name": "BaseBdev4", 00:25:40.223 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:40.223 "is_configured": true, 00:25:40.223 "data_offset": 2048, 00:25:40.223 "data_size": 63488 00:25:40.223 } 00:25:40.223 ] 00:25:40.223 }' 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:40.223 [2024-11-29 12:09:45.625768] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@657 -- # local timeout=559 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.223 12:09:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.481 [2024-11-29 12:09:45.746426] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:40.481 [2024-11-29 12:09:45.746978] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:40.481 12:09:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:40.481 "name": "raid_bdev1", 00:25:40.481 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:40.481 "strip_size_kb": 0, 00:25:40.481 "state": "online", 00:25:40.481 "raid_level": "raid1", 00:25:40.481 "superblock": true, 00:25:40.481 "num_base_bdevs": 4, 00:25:40.481 "num_base_bdevs_discovered": 3, 00:25:40.481 "num_base_bdevs_operational": 3, 00:25:40.481 "process": { 00:25:40.481 "type": "rebuild", 00:25:40.481 "target": "spare", 00:25:40.481 "progress": { 00:25:40.481 "blocks": 28672, 00:25:40.481 "percent": 45 00:25:40.481 } 00:25:40.481 }, 00:25:40.481 "base_bdevs_list": [ 00:25:40.481 { 00:25:40.481 "name": "spare", 00:25:40.481 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:40.481 "is_configured": true, 00:25:40.481 "data_offset": 2048, 00:25:40.481 "data_size": 63488 00:25:40.481 }, 00:25:40.481 { 00:25:40.481 "name": null, 00:25:40.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.481 "is_configured": false, 00:25:40.481 "data_offset": 2048, 00:25:40.481 "data_size": 63488 00:25:40.481 }, 00:25:40.481 { 00:25:40.481 "name": "BaseBdev3", 00:25:40.481 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:40.481 "is_configured": true, 00:25:40.481 "data_offset": 2048, 00:25:40.481 "data_size": 63488 00:25:40.481 }, 00:25:40.481 { 00:25:40.481 "name": "BaseBdev4", 
00:25:40.481 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:40.481 "is_configured": true, 00:25:40.481 "data_offset": 2048, 00:25:40.481 "data_size": 63488 00:25:40.481 } 00:25:40.481 ] 00:25:40.481 }' 00:25:40.481 12:09:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:40.481 12:09:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.481 12:09:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:40.481 12:09:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.481 12:09:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:41.047 [2024-11-29 12:09:46.472657] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:41.615 [2024-11-29 12:09:46.831914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:25:41.615 12:09:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:41.615 12:09:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.615 12:09:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:41.616 12:09:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:41.616 12:09:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:41.616 12:09:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:41.616 12:09:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.616 12:09:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.875 12:09:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:41.875 "name": "raid_bdev1", 00:25:41.875 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:41.875 "strip_size_kb": 0, 00:25:41.875 "state": "online", 00:25:41.875 "raid_level": "raid1", 00:25:41.875 "superblock": true, 00:25:41.875 "num_base_bdevs": 4, 00:25:41.875 "num_base_bdevs_discovered": 3, 00:25:41.875 "num_base_bdevs_operational": 3, 00:25:41.875 "process": { 00:25:41.875 "type": "rebuild", 00:25:41.875 "target": "spare", 00:25:41.875 "progress": { 00:25:41.875 "blocks": 49152, 00:25:41.875 "percent": 77 00:25:41.875 } 00:25:41.875 }, 00:25:41.875 "base_bdevs_list": [ 00:25:41.875 { 00:25:41.875 "name": "spare", 00:25:41.875 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:41.875 "is_configured": true, 00:25:41.875 "data_offset": 2048, 00:25:41.875 "data_size": 63488 00:25:41.875 }, 00:25:41.875 { 00:25:41.875 "name": null, 00:25:41.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.875 "is_configured": false, 00:25:41.875 "data_offset": 2048, 00:25:41.875 "data_size": 63488 00:25:41.875 }, 00:25:41.875 { 00:25:41.875 "name": "BaseBdev3", 00:25:41.875 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:41.875 "is_configured": true, 00:25:41.875 "data_offset": 2048, 00:25:41.875 "data_size": 63488 00:25:41.875 }, 00:25:41.875 { 00:25:41.875 "name": "BaseBdev4", 00:25:41.875 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:41.875 "is_configured": true, 00:25:41.875 "data_offset": 2048, 00:25:41.875 "data_size": 63488 00:25:41.875 } 00:25:41.875 ] 00:25:41.875 }' 00:25:41.875 12:09:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:41.875 12:09:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.875 12:09:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:41.875 12:09:47 -- bdev/bdev_raid.sh@191 
-- # [[ spare == \s\p\a\r\e ]] 00:25:41.875 12:09:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:42.135 [2024-11-29 12:09:47.391295] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:42.702 [2024-11-29 12:09:48.063933] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:42.702 [2024-11-29 12:09:48.163910] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:42.702 [2024-11-29 12:09:48.166927] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.961 12:09:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:43.220 "name": "raid_bdev1", 00:25:43.220 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:43.220 "strip_size_kb": 0, 00:25:43.220 "state": "online", 00:25:43.220 "raid_level": "raid1", 00:25:43.220 "superblock": true, 00:25:43.220 "num_base_bdevs": 4, 00:25:43.220 "num_base_bdevs_discovered": 3, 00:25:43.220 "num_base_bdevs_operational": 3, 00:25:43.220 "base_bdevs_list": [ 00:25:43.220 { 00:25:43.220 "name": "spare", 00:25:43.220 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:43.220 "is_configured": true, 00:25:43.220 "data_offset": 2048, 00:25:43.220 "data_size": 63488 00:25:43.220 }, 00:25:43.220 { 00:25:43.220 "name": null, 00:25:43.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.220 "is_configured": false, 00:25:43.220 "data_offset": 2048, 00:25:43.220 "data_size": 63488 00:25:43.220 }, 00:25:43.220 { 00:25:43.220 "name": "BaseBdev3", 00:25:43.220 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:43.220 "is_configured": true, 00:25:43.220 "data_offset": 2048, 00:25:43.220 "data_size": 63488 00:25:43.220 }, 00:25:43.220 { 00:25:43.220 "name": "BaseBdev4", 00:25:43.220 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:43.220 "is_configured": true, 00:25:43.220 "data_offset": 2048, 00:25:43.220 "data_size": 63488 00:25:43.220 } 00:25:43.220 ] 00:25:43.220 }' 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@660 -- # break 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
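The block above is the test's standard way of watching a background rebuild: it pulls the raid bdev out of bdev_raid_get_bdevs all with jq, checks that a rebuild onto the spare is still being reported, and sleeps for a second before looking again. A minimal bash sketch of that polling pattern, built only from the rpc.py socket path, bdev name and jq filters shown in the trace (the timeout value is assumed here for illustration):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=60   # assumed; the trace does not show the configured timeout
    SECONDS=0
    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        # keep polling only while a rebuild onto the spare is still in progress
        [[ $ptype == rebuild && $target == spare ]] || break
        sleep 1
    done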
00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.220 12:09:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.479 12:09:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:43.479 "name": "raid_bdev1", 00:25:43.479 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:43.479 "strip_size_kb": 0, 00:25:43.479 "state": "online", 00:25:43.479 "raid_level": "raid1", 00:25:43.479 "superblock": true, 00:25:43.479 "num_base_bdevs": 4, 00:25:43.479 "num_base_bdevs_discovered": 3, 00:25:43.479 "num_base_bdevs_operational": 3, 00:25:43.479 "base_bdevs_list": [ 00:25:43.479 { 00:25:43.479 "name": "spare", 00:25:43.479 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:43.479 "is_configured": true, 00:25:43.479 "data_offset": 2048, 00:25:43.479 "data_size": 63488 00:25:43.479 }, 00:25:43.479 { 00:25:43.479 "name": null, 00:25:43.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.479 "is_configured": false, 00:25:43.479 "data_offset": 2048, 00:25:43.479 "data_size": 63488 00:25:43.479 }, 00:25:43.479 { 00:25:43.479 "name": "BaseBdev3", 00:25:43.479 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:43.479 "is_configured": true, 00:25:43.479 "data_offset": 2048, 00:25:43.480 "data_size": 63488 00:25:43.480 }, 00:25:43.480 { 00:25:43.480 "name": "BaseBdev4", 00:25:43.480 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:43.480 "is_configured": true, 00:25:43.480 "data_offset": 2048, 00:25:43.480 "data_size": 63488 00:25:43.480 } 00:25:43.480 ] 00:25:43.480 }' 00:25:43.480 12:09:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.738 12:09:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.996 12:09:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:43.996 "name": "raid_bdev1", 00:25:43.996 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:43.996 "strip_size_kb": 0, 00:25:43.996 "state": "online", 00:25:43.996 "raid_level": "raid1", 00:25:43.996 "superblock": true, 00:25:43.996 "num_base_bdevs": 4, 00:25:43.996 "num_base_bdevs_discovered": 3, 00:25:43.996 "num_base_bdevs_operational": 3, 00:25:43.996 "base_bdevs_list": [ 00:25:43.996 { 00:25:43.996 "name": "spare", 00:25:43.996 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 
00:25:43.996 "is_configured": true, 00:25:43.996 "data_offset": 2048, 00:25:43.996 "data_size": 63488 00:25:43.996 }, 00:25:43.996 { 00:25:43.996 "name": null, 00:25:43.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.996 "is_configured": false, 00:25:43.996 "data_offset": 2048, 00:25:43.996 "data_size": 63488 00:25:43.996 }, 00:25:43.996 { 00:25:43.996 "name": "BaseBdev3", 00:25:43.996 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:43.996 "is_configured": true, 00:25:43.996 "data_offset": 2048, 00:25:43.996 "data_size": 63488 00:25:43.996 }, 00:25:43.996 { 00:25:43.996 "name": "BaseBdev4", 00:25:43.996 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:43.996 "is_configured": true, 00:25:43.996 "data_offset": 2048, 00:25:43.996 "data_size": 63488 00:25:43.996 } 00:25:43.996 ] 00:25:43.996 }' 00:25:43.996 12:09:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:43.996 12:09:49 -- common/autotest_common.sh@10 -- # set +x 00:25:44.563 12:09:49 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:44.822 [2024-11-29 12:09:50.166956] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:44.822 [2024-11-29 12:09:50.167011] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:44.822 00:25:44.822 Latency(us) 00:25:44.822 [2024-11-29T12:09:50.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.822 [2024-11-29T12:09:50.333Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:44.822 raid_bdev1 : 11.77 90.51 271.53 0.00 0.00 15189.58 351.88 121539.49 00:25:44.822 [2024-11-29T12:09:50.333Z] =================================================================================================================== 00:25:44.822 [2024-11-29T12:09:50.333Z] Total : 90.51 271.53 0.00 0.00 15189.58 351.88 121539.49 00:25:44.822 [2024-11-29 12:09:50.271837] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.822 [2024-11-29 12:09:50.271917] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:44.822 [2024-11-29 12:09:50.272050] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:44.822 [2024-11-29 12:09:50.272068] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:44.822 0 00:25:44.822 12:09:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:44.822 12:09:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.080 12:09:50 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:45.080 12:09:50 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:45.080 12:09:50 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@12 -- # local i 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:45.080 12:09:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
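Once the rebuild has completed, the trace tears the array down and proves it is gone: bdev_raid_delete removes raid_bdev1 (printing the per-run IOPS/latency summary), and a jq length check on bdev_raid_get_bdevs all must come back 0. A condensed sketch of that teardown check, using only the RPC calls visible above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    # after deletion no raid bdevs should be reported at all
    count=$($rpc bdev_raid_get_bdevs all | jq length)
    [[ $count == 0 ]]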
00:25:45.080 12:09:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:45.339 /dev/nbd0 00:25:45.339 12:09:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:45.339 12:09:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:45.339 12:09:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:45.339 12:09:50 -- common/autotest_common.sh@867 -- # local i 00:25:45.339 12:09:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:45.339 12:09:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:45.339 12:09:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:45.339 12:09:50 -- common/autotest_common.sh@871 -- # break 00:25:45.339 12:09:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:45.339 12:09:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:45.339 12:09:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:45.339 1+0 records in 00:25:45.339 1+0 records out 00:25:45.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048507 s, 8.4 MB/s 00:25:45.598 12:09:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.598 12:09:50 -- common/autotest_common.sh@884 -- # size=4096 00:25:45.598 12:09:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.598 12:09:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:45.598 12:09:50 -- common/autotest_common.sh@887 -- # return 0 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:45.598 12:09:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:45.598 12:09:50 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:45.598 12:09:50 -- bdev/bdev_raid.sh@678 -- # continue 00:25:45.598 12:09:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:45.598 12:09:50 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:45.598 12:09:50 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@12 -- # local i 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:45.598 12:09:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:45.856 /dev/nbd1 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:45.856 12:09:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:45.856 12:09:51 -- common/autotest_common.sh@867 -- # local i 00:25:45.856 12:09:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:45.856 12:09:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:45.856 12:09:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:45.856 12:09:51 
-- common/autotest_common.sh@871 -- # break 00:25:45.856 12:09:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:45.856 12:09:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:45.856 12:09:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:45.856 1+0 records in 00:25:45.856 1+0 records out 00:25:45.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424651 s, 9.6 MB/s 00:25:45.856 12:09:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.856 12:09:51 -- common/autotest_common.sh@884 -- # size=4096 00:25:45.856 12:09:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.856 12:09:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:45.856 12:09:51 -- common/autotest_common.sh@887 -- # return 0 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:45.856 12:09:51 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:45.856 12:09:51 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@51 -- # local i 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:45.856 12:09:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@41 -- # break 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.115 12:09:51 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:46.115 12:09:51 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:46.115 12:09:51 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@12 -- # local i 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.115 12:09:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:46.373 /dev/nbd1 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:46.373 12:09:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
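The nbd_start_disk / cmp sequence above is the data-integrity half of the test: the rebuilt spare and each remaining base bdev are exported as NBD block devices and compared byte for byte, skipping the first 1048576 bytes, which corresponds to the data_offset of 2048 512-byte blocks reported in the bdev JSON. A sketch of one such comparison, restricted to the spare and BaseBdev3 pair shown in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk spare /dev/nbd0
    $rpc nbd_start_disk BaseBdev3 /dev/nbd1
    # skip the 1 MiB metadata region (data_offset 2048 blocks x 512 bytes) before comparing
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0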
00:25:46.373 12:09:51 -- common/autotest_common.sh@867 -- # local i 00:25:46.373 12:09:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:46.373 12:09:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:46.373 12:09:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:46.373 12:09:51 -- common/autotest_common.sh@871 -- # break 00:25:46.373 12:09:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:46.373 12:09:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:46.373 12:09:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.373 1+0 records in 00:25:46.373 1+0 records out 00:25:46.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375646 s, 10.9 MB/s 00:25:46.373 12:09:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.373 12:09:51 -- common/autotest_common.sh@884 -- # size=4096 00:25:46.373 12:09:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.373 12:09:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:46.373 12:09:51 -- common/autotest_common.sh@887 -- # return 0 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.373 12:09:51 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:46.373 12:09:51 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@51 -- # local i 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:46.373 12:09:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@41 -- # break 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.631 12:09:52 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@51 -- # local i 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:46.631 12:09:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.890 
12:09:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@41 -- # break 00:25:46.890 12:09:52 -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.890 12:09:52 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:46.890 12:09:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:46.890 12:09:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:46.890 12:09:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:47.148 12:09:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:47.407 [2024-11-29 12:09:52.862784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:47.407 [2024-11-29 12:09:52.862898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.407 [2024-11-29 12:09:52.862947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:47.407 [2024-11-29 12:09:52.862973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.407 [2024-11-29 12:09:52.865572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.407 [2024-11-29 12:09:52.865650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:47.407 [2024-11-29 12:09:52.865760] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:47.407 [2024-11-29 12:09:52.865833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:47.407 BaseBdev1 00:25:47.407 12:09:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:47.407 12:09:52 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:25:47.407 12:09:52 -- bdev/bdev_raid.sh@696 -- # continue 00:25:47.407 12:09:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:47.407 12:09:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:47.407 12:09:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:47.665 12:09:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:47.923 [2024-11-29 12:09:53.422962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:47.923 [2024-11-29 12:09:53.423081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.923 [2024-11-29 12:09:53.423131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:47.923 [2024-11-29 12:09:53.423157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.923 [2024-11-29 12:09:53.423647] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.923 [2024-11-29 12:09:53.423728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:47.923 [2024-11-29 12:09:53.423826] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:47.923 [2024-11-29 12:09:53.423842] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) 
greater than existing raid bdev raid_bdev1 (1) 00:25:47.923 [2024-11-29 12:09:53.423850] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.923 [2024-11-29 12:09:53.423885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:25:47.923 [2024-11-29 12:09:53.423950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.923 BaseBdev3 00:25:48.182 12:09:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:48.182 12:09:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:48.182 12:09:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:48.441 12:09:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:48.441 [2024-11-29 12:09:53.919174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:48.441 [2024-11-29 12:09:53.919288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.441 [2024-11-29 12:09:53.919337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:48.441 [2024-11-29 12:09:53.919369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.441 [2024-11-29 12:09:53.919859] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.441 [2024-11-29 12:09:53.919931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:48.441 [2024-11-29 12:09:53.920041] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:48.441 [2024-11-29 12:09:53.920069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:48.441 BaseBdev4 00:25:48.441 12:09:53 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:48.700 12:09:54 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:48.960 [2024-11-29 12:09:54.383352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:48.960 [2024-11-29 12:09:54.383462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.960 [2024-11-29 12:09:54.383504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:48.960 [2024-11-29 12:09:54.383535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.960 [2024-11-29 12:09:54.384067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.960 [2024-11-29 12:09:54.384142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:48.960 [2024-11-29 12:09:54.384256] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:48.960 [2024-11-29 12:09:54.384299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.960 spare 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.960 12:09:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.219 [2024-11-29 12:09:54.484447] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:25:49.219 [2024-11-29 12:09:54.484496] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:49.219 [2024-11-29 12:09:54.484713] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033bc0 00:25:49.219 [2024-11-29 12:09:54.485266] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:25:49.219 [2024-11-29 12:09:54.485295] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:25:49.219 [2024-11-29 12:09:54.485456] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.219 12:09:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:49.219 "name": "raid_bdev1", 00:25:49.219 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:49.219 "strip_size_kb": 0, 00:25:49.219 "state": "online", 00:25:49.219 "raid_level": "raid1", 00:25:49.219 "superblock": true, 00:25:49.219 "num_base_bdevs": 4, 00:25:49.219 "num_base_bdevs_discovered": 3, 00:25:49.219 "num_base_bdevs_operational": 3, 00:25:49.219 "base_bdevs_list": [ 00:25:49.219 { 00:25:49.219 "name": "spare", 00:25:49.219 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:49.219 "is_configured": true, 00:25:49.219 "data_offset": 2048, 00:25:49.219 "data_size": 63488 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "name": null, 00:25:49.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.219 "is_configured": false, 00:25:49.219 "data_offset": 2048, 00:25:49.219 "data_size": 63488 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "name": "BaseBdev3", 00:25:49.219 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:49.219 "is_configured": true, 00:25:49.219 "data_offset": 2048, 00:25:49.219 "data_size": 63488 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "name": "BaseBdev4", 00:25:49.219 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:49.219 "is_configured": true, 00:25:49.219 "data_offset": 2048, 00:25:49.219 "data_size": 63488 00:25:49.219 } 00:25:49.219 ] 00:25:49.219 }' 00:25:49.219 12:09:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:49.219 12:09:54 -- common/autotest_common.sh@10 -- # set +x 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:50.151 "name": "raid_bdev1", 00:25:50.151 "uuid": "c087ab8a-039e-4050-a168-e3383c552b79", 00:25:50.151 "strip_size_kb": 0, 00:25:50.151 "state": "online", 00:25:50.151 "raid_level": "raid1", 00:25:50.151 "superblock": true, 00:25:50.151 "num_base_bdevs": 4, 00:25:50.151 "num_base_bdevs_discovered": 3, 00:25:50.151 "num_base_bdevs_operational": 3, 00:25:50.151 "base_bdevs_list": [ 00:25:50.151 { 00:25:50.151 "name": "spare", 00:25:50.151 "uuid": "0db6708d-b074-5a7a-8893-79b3a972f952", 00:25:50.151 "is_configured": true, 00:25:50.151 "data_offset": 2048, 00:25:50.151 "data_size": 63488 00:25:50.151 }, 00:25:50.151 { 00:25:50.151 "name": null, 00:25:50.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.151 "is_configured": false, 00:25:50.151 "data_offset": 2048, 00:25:50.151 "data_size": 63488 00:25:50.151 }, 00:25:50.151 { 00:25:50.151 "name": "BaseBdev3", 00:25:50.151 "uuid": "ac114d53-b7d2-5d31-9852-f1d30ffe68e1", 00:25:50.151 "is_configured": true, 00:25:50.151 "data_offset": 2048, 00:25:50.151 "data_size": 63488 00:25:50.151 }, 00:25:50.151 { 00:25:50.151 "name": "BaseBdev4", 00:25:50.151 "uuid": "79881db4-7645-578b-a940-07a802df658c", 00:25:50.151 "is_configured": true, 00:25:50.151 "data_offset": 2048, 00:25:50.151 "data_size": 63488 00:25:50.151 } 00:25:50.151 ] 00:25:50.151 }' 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:50.151 12:09:55 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.410 12:09:55 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.410 12:09:55 -- bdev/bdev_raid.sh@709 -- # killprocess 137757 00:25:50.410 12:09:55 -- common/autotest_common.sh@936 -- # '[' -z 137757 ']' 00:25:50.410 12:09:55 -- common/autotest_common.sh@940 -- # kill -0 137757 00:25:50.410 12:09:55 -- common/autotest_common.sh@941 -- # uname 00:25:50.410 12:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:50.410 12:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137757 00:25:50.410 12:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:50.410 12:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:50.410 12:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137757' 00:25:50.410 killing process with pid 137757 00:25:50.667 12:09:55 -- common/autotest_common.sh@955 -- # kill 137757 00:25:50.667 Received shutdown signal, test time was about 17.424327 seconds 00:25:50.667 00:25:50.667 Latency(us) 00:25:50.667 [2024-11-29T12:09:56.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.667 [2024-11-29T12:09:56.178Z] =================================================================================================================== 00:25:50.667 [2024-11-29T12:09:56.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.667 [2024-11-29 
12:09:55.924490] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:50.667 [2024-11-29 12:09:55.924586] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.667 [2024-11-29 12:09:55.924705] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:50.667 [2024-11-29 12:09:55.924729] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:25:50.667 12:09:55 -- common/autotest_common.sh@960 -- # wait 137757 00:25:50.667 [2024-11-29 12:09:55.980461] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:50.925 00:25:50.925 real 0m23.240s 00:25:50.925 user 0m38.404s 00:25:50.925 sys 0m3.035s 00:25:50.925 12:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:50.925 12:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:50.925 ************************************ 00:25:50.925 END TEST raid_rebuild_test_sb_io 00:25:50.925 ************************************ 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:50.925 12:09:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:50.925 12:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.925 12:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:50.925 ************************************ 00:25:50.925 START TEST raid5f_state_function_test 00:25:50.925 ************************************ 00:25:50.925 12:09:56 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:50.925 12:09:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:50.926 
12:09:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=138370 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138370' 00:25:50.926 Process raid pid: 138370 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:50.926 12:09:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138370 /var/tmp/spdk-raid.sock 00:25:50.926 12:09:56 -- common/autotest_common.sh@829 -- # '[' -z 138370 ']' 00:25:50.926 12:09:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:50.926 12:09:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.926 12:09:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:50.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:50.926 12:09:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.926 12:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:50.926 [2024-11-29 12:09:56.372879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:50.926 [2024-11-29 12:09:56.373088] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.185 [2024-11-29 12:09:56.514390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.185 [2024-11-29 12:09:56.610601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.185 [2024-11-29 12:09:56.665591] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:52.121 12:09:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.121 12:09:57 -- common/autotest_common.sh@862 -- # return 0 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:52.121 [2024-11-29 12:09:57.539694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:52.121 [2024-11-29 12:09:57.539808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:52.121 [2024-11-29 12:09:57.539825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:52.121 [2024-11-29 12:09:57.539846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:52.121 [2024-11-29 12:09:57.539854] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:52.121 [2024-11-29 12:09:57.539906] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:52.121 12:09:57 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.121 12:09:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.380 12:09:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:52.380 "name": "Existed_Raid", 00:25:52.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.380 "strip_size_kb": 64, 00:25:52.380 "state": "configuring", 00:25:52.380 "raid_level": "raid5f", 00:25:52.380 "superblock": false, 00:25:52.380 "num_base_bdevs": 3, 00:25:52.380 "num_base_bdevs_discovered": 0, 00:25:52.380 "num_base_bdevs_operational": 3, 00:25:52.380 "base_bdevs_list": [ 00:25:52.380 { 00:25:52.380 "name": "BaseBdev1", 00:25:52.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.380 "is_configured": false, 00:25:52.380 "data_offset": 0, 00:25:52.380 "data_size": 0 00:25:52.380 }, 00:25:52.380 { 00:25:52.380 "name": "BaseBdev2", 00:25:52.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.380 "is_configured": false, 00:25:52.380 "data_offset": 0, 00:25:52.380 "data_size": 0 00:25:52.380 }, 00:25:52.380 { 00:25:52.380 "name": "BaseBdev3", 00:25:52.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.380 "is_configured": false, 00:25:52.380 "data_offset": 0, 00:25:52.380 "data_size": 0 00:25:52.380 } 00:25:52.380 ] 00:25:52.380 }' 00:25:52.380 12:09:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:52.380 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:25:53.315 12:09:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:53.315 [2024-11-29 12:09:58.691783] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:53.315 [2024-11-29 12:09:58.691844] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:25:53.315 12:09:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:53.616 [2024-11-29 12:09:58.923863] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:53.616 [2024-11-29 12:09:58.923958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:53.616 [2024-11-29 12:09:58.923972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:53.616 [2024-11-29 12:09:58.923997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:53.616 [2024-11-29 12:09:58.924005] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:53.616 [2024-11-29 12:09:58.924035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:53.616 12:09:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
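The raid5f_state_function_test above starts from the opposite direction: the raid bdev is declared before any of its members exist, so bdev_raid_create succeeds but Existed_Raid stays in the "configuring" state, and the base bdevs are then supplied one at a time as 32 MiB malloc bdevs with 512-byte blocks (65536 blocks, matching the JSON dumps in the trace). A short sketch of that create-then-populate pattern, using only the RPC calls visible in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # declare the raid5f array first; its members do not exist yet
    $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # supply the first member: 32 MiB malloc bdev, 512-byte blocks
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    # the array stays in "configuring" until all three base bdevs are discovered
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'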
00:25:53.892 [2024-11-29 12:09:59.175715] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:53.892 BaseBdev1 00:25:53.892 12:09:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:53.892 12:09:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:53.892 12:09:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:53.892 12:09:59 -- common/autotest_common.sh@899 -- # local i 00:25:53.892 12:09:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:53.892 12:09:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:53.892 12:09:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.150 12:09:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:54.150 [ 00:25:54.150 { 00:25:54.150 "name": "BaseBdev1", 00:25:54.150 "aliases": [ 00:25:54.150 "4e56eacc-0332-4e0a-a730-a556f3b29276" 00:25:54.150 ], 00:25:54.150 "product_name": "Malloc disk", 00:25:54.150 "block_size": 512, 00:25:54.150 "num_blocks": 65536, 00:25:54.150 "uuid": "4e56eacc-0332-4e0a-a730-a556f3b29276", 00:25:54.150 "assigned_rate_limits": { 00:25:54.150 "rw_ios_per_sec": 0, 00:25:54.150 "rw_mbytes_per_sec": 0, 00:25:54.150 "r_mbytes_per_sec": 0, 00:25:54.150 "w_mbytes_per_sec": 0 00:25:54.150 }, 00:25:54.150 "claimed": true, 00:25:54.150 "claim_type": "exclusive_write", 00:25:54.150 "zoned": false, 00:25:54.150 "supported_io_types": { 00:25:54.150 "read": true, 00:25:54.150 "write": true, 00:25:54.150 "unmap": true, 00:25:54.150 "write_zeroes": true, 00:25:54.150 "flush": true, 00:25:54.150 "reset": true, 00:25:54.150 "compare": false, 00:25:54.150 "compare_and_write": false, 00:25:54.150 "abort": true, 00:25:54.150 "nvme_admin": false, 00:25:54.150 "nvme_io": false 00:25:54.150 }, 00:25:54.150 "memory_domains": [ 00:25:54.150 { 00:25:54.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.150 "dma_device_type": 2 00:25:54.150 } 00:25:54.150 ], 00:25:54.150 "driver_specific": {} 00:25:54.150 } 00:25:54.150 ] 00:25:54.408 12:09:59 -- common/autotest_common.sh@905 -- # return 0 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.408 12:09:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.667 12:09:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:54.667 "name": "Existed_Raid", 00:25:54.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.667 "strip_size_kb": 64, 00:25:54.667 "state": "configuring", 
00:25:54.667 "raid_level": "raid5f", 00:25:54.667 "superblock": false, 00:25:54.667 "num_base_bdevs": 3, 00:25:54.667 "num_base_bdevs_discovered": 1, 00:25:54.667 "num_base_bdevs_operational": 3, 00:25:54.667 "base_bdevs_list": [ 00:25:54.667 { 00:25:54.667 "name": "BaseBdev1", 00:25:54.667 "uuid": "4e56eacc-0332-4e0a-a730-a556f3b29276", 00:25:54.667 "is_configured": true, 00:25:54.667 "data_offset": 0, 00:25:54.667 "data_size": 65536 00:25:54.667 }, 00:25:54.667 { 00:25:54.667 "name": "BaseBdev2", 00:25:54.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.667 "is_configured": false, 00:25:54.667 "data_offset": 0, 00:25:54.667 "data_size": 0 00:25:54.667 }, 00:25:54.667 { 00:25:54.667 "name": "BaseBdev3", 00:25:54.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.667 "is_configured": false, 00:25:54.667 "data_offset": 0, 00:25:54.667 "data_size": 0 00:25:54.667 } 00:25:54.667 ] 00:25:54.667 }' 00:25:54.667 12:09:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:54.667 12:09:59 -- common/autotest_common.sh@10 -- # set +x 00:25:55.232 12:10:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:55.491 [2024-11-29 12:10:00.820127] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.491 [2024-11-29 12:10:00.820216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:25:55.491 12:10:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:55.491 12:10:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:55.750 [2024-11-29 12:10:01.060279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.750 [2024-11-29 12:10:01.062606] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.750 [2024-11-29 12:10:01.062679] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.750 [2024-11-29 12:10:01.062692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:55.750 [2024-11-29 12:10:01.062722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:55.750 12:10:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.750 12:10:01 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.008 12:10:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:56.008 "name": "Existed_Raid", 00:25:56.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.008 "strip_size_kb": 64, 00:25:56.008 "state": "configuring", 00:25:56.008 "raid_level": "raid5f", 00:25:56.008 "superblock": false, 00:25:56.008 "num_base_bdevs": 3, 00:25:56.008 "num_base_bdevs_discovered": 1, 00:25:56.008 "num_base_bdevs_operational": 3, 00:25:56.008 "base_bdevs_list": [ 00:25:56.008 { 00:25:56.008 "name": "BaseBdev1", 00:25:56.008 "uuid": "4e56eacc-0332-4e0a-a730-a556f3b29276", 00:25:56.008 "is_configured": true, 00:25:56.008 "data_offset": 0, 00:25:56.008 "data_size": 65536 00:25:56.008 }, 00:25:56.008 { 00:25:56.008 "name": "BaseBdev2", 00:25:56.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.008 "is_configured": false, 00:25:56.008 "data_offset": 0, 00:25:56.008 "data_size": 0 00:25:56.008 }, 00:25:56.008 { 00:25:56.008 "name": "BaseBdev3", 00:25:56.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.008 "is_configured": false, 00:25:56.008 "data_offset": 0, 00:25:56.008 "data_size": 0 00:25:56.008 } 00:25:56.008 ] 00:25:56.008 }' 00:25:56.008 12:10:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:56.008 12:10:01 -- common/autotest_common.sh@10 -- # set +x 00:25:56.576 12:10:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:56.834 [2024-11-29 12:10:02.314815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:56.834 BaseBdev2 00:25:56.834 12:10:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:56.834 12:10:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:56.834 12:10:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:56.834 12:10:02 -- common/autotest_common.sh@899 -- # local i 00:25:56.834 12:10:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:56.834 12:10:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:56.834 12:10:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:57.092 12:10:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:57.350 [ 00:25:57.350 { 00:25:57.350 "name": "BaseBdev2", 00:25:57.350 "aliases": [ 00:25:57.350 "a2f19068-a584-417d-8e30-71cabb10688f" 00:25:57.350 ], 00:25:57.350 "product_name": "Malloc disk", 00:25:57.350 "block_size": 512, 00:25:57.350 "num_blocks": 65536, 00:25:57.350 "uuid": "a2f19068-a584-417d-8e30-71cabb10688f", 00:25:57.350 "assigned_rate_limits": { 00:25:57.350 "rw_ios_per_sec": 0, 00:25:57.350 "rw_mbytes_per_sec": 0, 00:25:57.350 "r_mbytes_per_sec": 0, 00:25:57.350 "w_mbytes_per_sec": 0 00:25:57.350 }, 00:25:57.350 "claimed": true, 00:25:57.350 "claim_type": "exclusive_write", 00:25:57.350 "zoned": false, 00:25:57.350 "supported_io_types": { 00:25:57.350 "read": true, 00:25:57.350 "write": true, 00:25:57.350 "unmap": true, 00:25:57.350 "write_zeroes": true, 00:25:57.350 "flush": true, 00:25:57.350 "reset": true, 00:25:57.350 "compare": false, 00:25:57.350 "compare_and_write": false, 00:25:57.350 "abort": true, 00:25:57.350 "nvme_admin": false, 00:25:57.350 "nvme_io": false 00:25:57.350 }, 00:25:57.350 "memory_domains": [ 00:25:57.350 { 00:25:57.350 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.350 "dma_device_type": 2 00:25:57.350 } 00:25:57.350 ], 00:25:57.350 "driver_specific": {} 00:25:57.350 } 00:25:57.350 ] 00:25:57.350 12:10:02 -- common/autotest_common.sh@905 -- # return 0 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.350 12:10:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.609 12:10:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:57.609 "name": "Existed_Raid", 00:25:57.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.609 "strip_size_kb": 64, 00:25:57.609 "state": "configuring", 00:25:57.609 "raid_level": "raid5f", 00:25:57.609 "superblock": false, 00:25:57.609 "num_base_bdevs": 3, 00:25:57.609 "num_base_bdevs_discovered": 2, 00:25:57.609 "num_base_bdevs_operational": 3, 00:25:57.609 "base_bdevs_list": [ 00:25:57.609 { 00:25:57.609 "name": "BaseBdev1", 00:25:57.609 "uuid": "4e56eacc-0332-4e0a-a730-a556f3b29276", 00:25:57.609 "is_configured": true, 00:25:57.609 "data_offset": 0, 00:25:57.609 "data_size": 65536 00:25:57.609 }, 00:25:57.609 { 00:25:57.609 "name": "BaseBdev2", 00:25:57.609 "uuid": "a2f19068-a584-417d-8e30-71cabb10688f", 00:25:57.609 "is_configured": true, 00:25:57.609 "data_offset": 0, 00:25:57.609 "data_size": 65536 00:25:57.609 }, 00:25:57.609 { 00:25:57.609 "name": "BaseBdev3", 00:25:57.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.609 "is_configured": false, 00:25:57.609 "data_offset": 0, 00:25:57.609 "data_size": 0 00:25:57.609 } 00:25:57.609 ] 00:25:57.609 }' 00:25:57.609 12:10:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.609 12:10:03 -- common/autotest_common.sh@10 -- # set +x 00:25:58.543 12:10:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:58.543 [2024-11-29 12:10:03.988640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:58.543 [2024-11-29 12:10:03.988750] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:25:58.543 [2024-11-29 12:10:03.988765] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:58.543 [2024-11-29 12:10:03.988944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:25:58.543 [2024-11-29 12:10:03.989811] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:25:58.543 [2024-11-29 
12:10:03.989840] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:25:58.543 [2024-11-29 12:10:03.990133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.543 BaseBdev3 00:25:58.543 12:10:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:58.543 12:10:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:58.543 12:10:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:58.543 12:10:04 -- common/autotest_common.sh@899 -- # local i 00:25:58.543 12:10:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:58.543 12:10:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:58.543 12:10:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:58.801 12:10:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:59.059 [ 00:25:59.059 { 00:25:59.059 "name": "BaseBdev3", 00:25:59.059 "aliases": [ 00:25:59.059 "f52ae366-6bfe-45c1-898d-80745e580059" 00:25:59.059 ], 00:25:59.059 "product_name": "Malloc disk", 00:25:59.059 "block_size": 512, 00:25:59.059 "num_blocks": 65536, 00:25:59.059 "uuid": "f52ae366-6bfe-45c1-898d-80745e580059", 00:25:59.059 "assigned_rate_limits": { 00:25:59.059 "rw_ios_per_sec": 0, 00:25:59.059 "rw_mbytes_per_sec": 0, 00:25:59.059 "r_mbytes_per_sec": 0, 00:25:59.059 "w_mbytes_per_sec": 0 00:25:59.059 }, 00:25:59.059 "claimed": true, 00:25:59.059 "claim_type": "exclusive_write", 00:25:59.059 "zoned": false, 00:25:59.059 "supported_io_types": { 00:25:59.059 "read": true, 00:25:59.059 "write": true, 00:25:59.059 "unmap": true, 00:25:59.059 "write_zeroes": true, 00:25:59.059 "flush": true, 00:25:59.059 "reset": true, 00:25:59.059 "compare": false, 00:25:59.059 "compare_and_write": false, 00:25:59.059 "abort": true, 00:25:59.059 "nvme_admin": false, 00:25:59.059 "nvme_io": false 00:25:59.059 }, 00:25:59.059 "memory_domains": [ 00:25:59.059 { 00:25:59.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.059 "dma_device_type": 2 00:25:59.059 } 00:25:59.059 ], 00:25:59.059 "driver_specific": {} 00:25:59.059 } 00:25:59.059 ] 00:25:59.059 12:10:04 -- common/autotest_common.sh@905 -- # return 0 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.059 12:10:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:25:59.317 12:10:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:59.317 "name": "Existed_Raid", 00:25:59.317 "uuid": "66aa88ab-8124-49ca-ab38-3a9bb8f3186c", 00:25:59.317 "strip_size_kb": 64, 00:25:59.317 "state": "online", 00:25:59.317 "raid_level": "raid5f", 00:25:59.317 "superblock": false, 00:25:59.317 "num_base_bdevs": 3, 00:25:59.317 "num_base_bdevs_discovered": 3, 00:25:59.317 "num_base_bdevs_operational": 3, 00:25:59.317 "base_bdevs_list": [ 00:25:59.317 { 00:25:59.317 "name": "BaseBdev1", 00:25:59.317 "uuid": "4e56eacc-0332-4e0a-a730-a556f3b29276", 00:25:59.317 "is_configured": true, 00:25:59.317 "data_offset": 0, 00:25:59.317 "data_size": 65536 00:25:59.317 }, 00:25:59.317 { 00:25:59.317 "name": "BaseBdev2", 00:25:59.317 "uuid": "a2f19068-a584-417d-8e30-71cabb10688f", 00:25:59.317 "is_configured": true, 00:25:59.317 "data_offset": 0, 00:25:59.317 "data_size": 65536 00:25:59.317 }, 00:25:59.317 { 00:25:59.317 "name": "BaseBdev3", 00:25:59.317 "uuid": "f52ae366-6bfe-45c1-898d-80745e580059", 00:25:59.317 "is_configured": true, 00:25:59.317 "data_offset": 0, 00:25:59.317 "data_size": 65536 00:25:59.317 } 00:25:59.317 ] 00:25:59.317 }' 00:25:59.317 12:10:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:59.317 12:10:04 -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 12:10:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:00.140 [2024-11-29 12:10:05.541197] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:00.140 12:10:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:00.140 12:10:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:00.140 12:10:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:00.140 12:10:05 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.141 12:10:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.397 12:10:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.397 "name": "Existed_Raid", 00:26:00.397 "uuid": "66aa88ab-8124-49ca-ab38-3a9bb8f3186c", 00:26:00.397 "strip_size_kb": 64, 00:26:00.397 "state": "online", 00:26:00.397 "raid_level": "raid5f", 00:26:00.397 "superblock": false, 00:26:00.397 "num_base_bdevs": 3, 00:26:00.397 "num_base_bdevs_discovered": 2, 00:26:00.397 "num_base_bdevs_operational": 2, 00:26:00.397 "base_bdevs_list": [ 00:26:00.397 { 00:26:00.397 "name": null, 00:26:00.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.397 "is_configured": 
false, 00:26:00.397 "data_offset": 0, 00:26:00.397 "data_size": 65536 00:26:00.397 }, 00:26:00.397 { 00:26:00.397 "name": "BaseBdev2", 00:26:00.397 "uuid": "a2f19068-a584-417d-8e30-71cabb10688f", 00:26:00.397 "is_configured": true, 00:26:00.397 "data_offset": 0, 00:26:00.397 "data_size": 65536 00:26:00.397 }, 00:26:00.397 { 00:26:00.397 "name": "BaseBdev3", 00:26:00.397 "uuid": "f52ae366-6bfe-45c1-898d-80745e580059", 00:26:00.397 "is_configured": true, 00:26:00.397 "data_offset": 0, 00:26:00.397 "data_size": 65536 00:26:00.397 } 00:26:00.397 ] 00:26:00.397 }' 00:26:00.397 12:10:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.397 12:10:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:01.331 12:10:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:01.589 [2024-11-29 12:10:06.948440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:01.589 [2024-11-29 12:10:06.948491] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:01.589 [2024-11-29 12:10:06.948564] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.589 12:10:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:01.589 12:10:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:01.589 12:10:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:01.589 12:10:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.848 12:10:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:01.848 12:10:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:01.848 12:10:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:02.106 [2024-11-29 12:10:07.426780] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:02.106 [2024-11-29 12:10:07.426888] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:26:02.106 12:10:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:02.106 12:10:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:02.106 12:10:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.106 12:10:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:02.364 12:10:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:02.364 12:10:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:02.364 12:10:07 -- bdev/bdev_raid.sh@287 -- # killprocess 138370 00:26:02.364 12:10:07 -- common/autotest_common.sh@936 -- # '[' -z 138370 ']' 00:26:02.364 12:10:07 -- common/autotest_common.sh@940 -- # kill -0 138370 00:26:02.364 12:10:07 -- common/autotest_common.sh@941 -- # uname 00:26:02.364 12:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:02.364 
12:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138370 00:26:02.364 12:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:02.364 12:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:02.364 12:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138370' 00:26:02.364 killing process with pid 138370 00:26:02.364 12:10:07 -- common/autotest_common.sh@955 -- # kill 138370 00:26:02.364 [2024-11-29 12:10:07.703515] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:02.364 [2024-11-29 12:10:07.703637] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:02.364 12:10:07 -- common/autotest_common.sh@960 -- # wait 138370 00:26:02.622 12:10:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:02.622 00:26:02.622 real 0m11.644s 00:26:02.622 user 0m21.327s 00:26:02.622 sys 0m1.536s 00:26:02.622 12:10:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:02.622 12:10:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.622 ************************************ 00:26:02.622 END TEST raid5f_state_function_test 00:26:02.622 ************************************ 00:26:02.622 12:10:07 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:26:02.622 12:10:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:02.622 12:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:02.622 12:10:08 -- common/autotest_common.sh@10 -- # set +x 00:26:02.622 ************************************ 00:26:02.622 START TEST raid5f_state_function_test_sb 00:26:02.622 ************************************ 00:26:02.622 12:10:08 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=138747 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138747' 00:26:02.622 Process raid pid: 138747 00:26:02.622 12:10:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138747 /var/tmp/spdk-raid.sock 00:26:02.622 12:10:08 -- common/autotest_common.sh@829 -- # '[' -z 138747 ']' 00:26:02.622 12:10:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:02.622 12:10:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.622 12:10:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:02.622 12:10:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.622 12:10:08 -- common/autotest_common.sh@10 -- # set +x 00:26:02.622 [2024-11-29 12:10:08.082042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:02.622 [2024-11-29 12:10:08.082628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.881 [2024-11-29 12:10:08.224858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.881 [2024-11-29 12:10:08.322977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.881 [2024-11-29 12:10:08.379252] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:03.812 12:10:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.812 12:10:09 -- common/autotest_common.sh@862 -- # return 0 00:26:03.812 12:10:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:03.812 [2024-11-29 12:10:09.297940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:03.812 [2024-11-29 12:10:09.298386] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:03.812 [2024-11-29 12:10:09.298524] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:03.812 [2024-11-29 12:10:09.298603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:03.812 [2024-11-29 12:10:09.298733] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:03.812 [2024-11-29 12:10:09.298834] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.813 12:10:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.071 12:10:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:04.071 "name": "Existed_Raid", 00:26:04.071 "uuid": "2bfe78ac-1a32-4b09-b1ca-e1a1df5f147f", 00:26:04.071 "strip_size_kb": 64, 00:26:04.071 "state": "configuring", 00:26:04.071 "raid_level": "raid5f", 00:26:04.071 "superblock": true, 00:26:04.071 "num_base_bdevs": 3, 00:26:04.071 "num_base_bdevs_discovered": 0, 00:26:04.071 "num_base_bdevs_operational": 3, 00:26:04.071 "base_bdevs_list": [ 00:26:04.071 { 00:26:04.071 "name": "BaseBdev1", 00:26:04.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.071 "is_configured": false, 00:26:04.071 "data_offset": 0, 00:26:04.071 "data_size": 0 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "name": "BaseBdev2", 00:26:04.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.071 "is_configured": false, 00:26:04.071 "data_offset": 0, 00:26:04.071 "data_size": 0 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "name": "BaseBdev3", 00:26:04.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.071 "is_configured": false, 00:26:04.071 "data_offset": 0, 00:26:04.071 "data_size": 0 00:26:04.071 } 00:26:04.071 ] 00:26:04.071 }' 00:26:04.071 12:10:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:04.071 12:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:05.004 12:10:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:05.004 [2024-11-29 12:10:10.381968] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:05.004 [2024-11-29 12:10:10.382339] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:26:05.004 12:10:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:05.263 [2024-11-29 12:10:10.622102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.263 [2024-11-29 12:10:10.622517] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.263 [2024-11-29 12:10:10.622644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.263 [2024-11-29 12:10:10.622715] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.263 [2024-11-29 12:10:10.622825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:05.263 [2024-11-29 12:10:10.622905] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:05.263 12:10:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:05.520 [2024-11-29 
12:10:10.881934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.521 BaseBdev1 00:26:05.521 12:10:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:05.521 12:10:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:05.521 12:10:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:05.521 12:10:10 -- common/autotest_common.sh@899 -- # local i 00:26:05.521 12:10:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:05.521 12:10:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:05.521 12:10:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:05.778 12:10:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:06.036 [ 00:26:06.036 { 00:26:06.036 "name": "BaseBdev1", 00:26:06.036 "aliases": [ 00:26:06.036 "40be6525-eb0c-483c-b234-ed85f4c8487e" 00:26:06.036 ], 00:26:06.036 "product_name": "Malloc disk", 00:26:06.036 "block_size": 512, 00:26:06.036 "num_blocks": 65536, 00:26:06.036 "uuid": "40be6525-eb0c-483c-b234-ed85f4c8487e", 00:26:06.036 "assigned_rate_limits": { 00:26:06.036 "rw_ios_per_sec": 0, 00:26:06.036 "rw_mbytes_per_sec": 0, 00:26:06.036 "r_mbytes_per_sec": 0, 00:26:06.036 "w_mbytes_per_sec": 0 00:26:06.036 }, 00:26:06.036 "claimed": true, 00:26:06.036 "claim_type": "exclusive_write", 00:26:06.036 "zoned": false, 00:26:06.036 "supported_io_types": { 00:26:06.036 "read": true, 00:26:06.036 "write": true, 00:26:06.036 "unmap": true, 00:26:06.036 "write_zeroes": true, 00:26:06.036 "flush": true, 00:26:06.036 "reset": true, 00:26:06.036 "compare": false, 00:26:06.036 "compare_and_write": false, 00:26:06.036 "abort": true, 00:26:06.036 "nvme_admin": false, 00:26:06.036 "nvme_io": false 00:26:06.036 }, 00:26:06.036 "memory_domains": [ 00:26:06.036 { 00:26:06.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.036 "dma_device_type": 2 00:26:06.036 } 00:26:06.036 ], 00:26:06.036 "driver_specific": {} 00:26:06.036 } 00:26:06.036 ] 00:26:06.036 12:10:11 -- common/autotest_common.sh@905 -- # return 0 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.036 12:10:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.294 12:10:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:06.294 "name": "Existed_Raid", 00:26:06.294 "uuid": "5750ac47-6dd0-4e2b-b68f-7e7c11b80c10", 00:26:06.294 "strip_size_kb": 64, 00:26:06.294 "state": "configuring", 00:26:06.294 "raid_level": 
"raid5f", 00:26:06.294 "superblock": true, 00:26:06.294 "num_base_bdevs": 3, 00:26:06.294 "num_base_bdevs_discovered": 1, 00:26:06.294 "num_base_bdevs_operational": 3, 00:26:06.294 "base_bdevs_list": [ 00:26:06.294 { 00:26:06.294 "name": "BaseBdev1", 00:26:06.294 "uuid": "40be6525-eb0c-483c-b234-ed85f4c8487e", 00:26:06.294 "is_configured": true, 00:26:06.294 "data_offset": 2048, 00:26:06.294 "data_size": 63488 00:26:06.294 }, 00:26:06.294 { 00:26:06.294 "name": "BaseBdev2", 00:26:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.294 "is_configured": false, 00:26:06.294 "data_offset": 0, 00:26:06.294 "data_size": 0 00:26:06.294 }, 00:26:06.294 { 00:26:06.294 "name": "BaseBdev3", 00:26:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.294 "is_configured": false, 00:26:06.294 "data_offset": 0, 00:26:06.294 "data_size": 0 00:26:06.294 } 00:26:06.294 ] 00:26:06.294 }' 00:26:06.294 12:10:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:06.294 12:10:11 -- common/autotest_common.sh@10 -- # set +x 00:26:06.860 12:10:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:07.175 [2024-11-29 12:10:12.574387] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:07.175 [2024-11-29 12:10:12.574659] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:26:07.175 12:10:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:26:07.175 12:10:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:07.433 12:10:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:07.692 BaseBdev1 00:26:07.692 12:10:13 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:26:07.692 12:10:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:07.692 12:10:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:07.692 12:10:13 -- common/autotest_common.sh@899 -- # local i 00:26:07.692 12:10:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:07.692 12:10:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:07.692 12:10:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:07.950 12:10:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:08.209 [ 00:26:08.209 { 00:26:08.209 "name": "BaseBdev1", 00:26:08.209 "aliases": [ 00:26:08.209 "d22ed50b-baca-48ed-873e-c508ccc301c4" 00:26:08.209 ], 00:26:08.209 "product_name": "Malloc disk", 00:26:08.209 "block_size": 512, 00:26:08.209 "num_blocks": 65536, 00:26:08.209 "uuid": "d22ed50b-baca-48ed-873e-c508ccc301c4", 00:26:08.209 "assigned_rate_limits": { 00:26:08.209 "rw_ios_per_sec": 0, 00:26:08.209 "rw_mbytes_per_sec": 0, 00:26:08.209 "r_mbytes_per_sec": 0, 00:26:08.209 "w_mbytes_per_sec": 0 00:26:08.209 }, 00:26:08.209 "claimed": false, 00:26:08.209 "zoned": false, 00:26:08.209 "supported_io_types": { 00:26:08.209 "read": true, 00:26:08.209 "write": true, 00:26:08.209 "unmap": true, 00:26:08.209 "write_zeroes": true, 00:26:08.209 "flush": true, 00:26:08.209 "reset": true, 00:26:08.209 "compare": false, 00:26:08.209 "compare_and_write": false, 00:26:08.209 "abort": true, 
00:26:08.209 "nvme_admin": false, 00:26:08.209 "nvme_io": false 00:26:08.209 }, 00:26:08.209 "memory_domains": [ 00:26:08.209 { 00:26:08.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.209 "dma_device_type": 2 00:26:08.209 } 00:26:08.209 ], 00:26:08.209 "driver_specific": {} 00:26:08.209 } 00:26:08.209 ] 00:26:08.209 12:10:13 -- common/autotest_common.sh@905 -- # return 0 00:26:08.209 12:10:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:08.468 [2024-11-29 12:10:13.812618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.468 [2024-11-29 12:10:13.815257] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:08.468 [2024-11-29 12:10:13.815510] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:08.468 [2024-11-29 12:10:13.815683] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:08.468 [2024-11-29 12:10:13.815762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.468 12:10:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.726 12:10:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:08.726 "name": "Existed_Raid", 00:26:08.726 "uuid": "35f01b2b-21ed-4c43-a65d-d608d084c355", 00:26:08.726 "strip_size_kb": 64, 00:26:08.726 "state": "configuring", 00:26:08.726 "raid_level": "raid5f", 00:26:08.726 "superblock": true, 00:26:08.726 "num_base_bdevs": 3, 00:26:08.726 "num_base_bdevs_discovered": 1, 00:26:08.726 "num_base_bdevs_operational": 3, 00:26:08.726 "base_bdevs_list": [ 00:26:08.726 { 00:26:08.726 "name": "BaseBdev1", 00:26:08.726 "uuid": "d22ed50b-baca-48ed-873e-c508ccc301c4", 00:26:08.726 "is_configured": true, 00:26:08.726 "data_offset": 2048, 00:26:08.726 "data_size": 63488 00:26:08.726 }, 00:26:08.726 { 00:26:08.726 "name": "BaseBdev2", 00:26:08.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.726 "is_configured": false, 00:26:08.726 "data_offset": 0, 00:26:08.726 "data_size": 0 00:26:08.726 }, 00:26:08.726 { 00:26:08.726 "name": "BaseBdev3", 00:26:08.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.726 "is_configured": false, 00:26:08.726 "data_offset": 0, 00:26:08.726 
"data_size": 0 00:26:08.726 } 00:26:08.726 ] 00:26:08.726 }' 00:26:08.726 12:10:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:08.726 12:10:14 -- common/autotest_common.sh@10 -- # set +x 00:26:09.291 12:10:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:09.550 [2024-11-29 12:10:14.934533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:09.550 BaseBdev2 00:26:09.550 12:10:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:09.550 12:10:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:09.550 12:10:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:09.550 12:10:14 -- common/autotest_common.sh@899 -- # local i 00:26:09.550 12:10:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:09.550 12:10:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:09.550 12:10:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:09.812 12:10:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:10.071 [ 00:26:10.071 { 00:26:10.071 "name": "BaseBdev2", 00:26:10.071 "aliases": [ 00:26:10.071 "34253553-a2f3-43ea-b174-547fcb4d2ed0" 00:26:10.071 ], 00:26:10.071 "product_name": "Malloc disk", 00:26:10.071 "block_size": 512, 00:26:10.071 "num_blocks": 65536, 00:26:10.071 "uuid": "34253553-a2f3-43ea-b174-547fcb4d2ed0", 00:26:10.071 "assigned_rate_limits": { 00:26:10.071 "rw_ios_per_sec": 0, 00:26:10.071 "rw_mbytes_per_sec": 0, 00:26:10.071 "r_mbytes_per_sec": 0, 00:26:10.071 "w_mbytes_per_sec": 0 00:26:10.071 }, 00:26:10.071 "claimed": true, 00:26:10.071 "claim_type": "exclusive_write", 00:26:10.071 "zoned": false, 00:26:10.071 "supported_io_types": { 00:26:10.071 "read": true, 00:26:10.071 "write": true, 00:26:10.071 "unmap": true, 00:26:10.071 "write_zeroes": true, 00:26:10.071 "flush": true, 00:26:10.071 "reset": true, 00:26:10.071 "compare": false, 00:26:10.071 "compare_and_write": false, 00:26:10.071 "abort": true, 00:26:10.071 "nvme_admin": false, 00:26:10.071 "nvme_io": false 00:26:10.071 }, 00:26:10.071 "memory_domains": [ 00:26:10.071 { 00:26:10.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.071 "dma_device_type": 2 00:26:10.071 } 00:26:10.071 ], 00:26:10.071 "driver_specific": {} 00:26:10.071 } 00:26:10.071 ] 00:26:10.071 12:10:15 -- common/autotest_common.sh@905 -- # return 0 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.071 12:10:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.329 12:10:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.329 "name": "Existed_Raid", 00:26:10.329 "uuid": "35f01b2b-21ed-4c43-a65d-d608d084c355", 00:26:10.329 "strip_size_kb": 64, 00:26:10.329 "state": "configuring", 00:26:10.329 "raid_level": "raid5f", 00:26:10.329 "superblock": true, 00:26:10.329 "num_base_bdevs": 3, 00:26:10.329 "num_base_bdevs_discovered": 2, 00:26:10.329 "num_base_bdevs_operational": 3, 00:26:10.329 "base_bdevs_list": [ 00:26:10.329 { 00:26:10.329 "name": "BaseBdev1", 00:26:10.329 "uuid": "d22ed50b-baca-48ed-873e-c508ccc301c4", 00:26:10.329 "is_configured": true, 00:26:10.329 "data_offset": 2048, 00:26:10.329 "data_size": 63488 00:26:10.329 }, 00:26:10.329 { 00:26:10.329 "name": "BaseBdev2", 00:26:10.329 "uuid": "34253553-a2f3-43ea-b174-547fcb4d2ed0", 00:26:10.329 "is_configured": true, 00:26:10.329 "data_offset": 2048, 00:26:10.329 "data_size": 63488 00:26:10.329 }, 00:26:10.329 { 00:26:10.329 "name": "BaseBdev3", 00:26:10.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.329 "is_configured": false, 00:26:10.329 "data_offset": 0, 00:26:10.329 "data_size": 0 00:26:10.329 } 00:26:10.329 ] 00:26:10.329 }' 00:26:10.329 12:10:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.329 12:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:10.895 12:10:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:11.154 [2024-11-29 12:10:16.640186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:11.154 [2024-11-29 12:10:16.640799] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:26:11.154 [2024-11-29 12:10:16.640942] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:11.154 [2024-11-29 12:10:16.641129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:26:11.154 BaseBdev3 00:26:11.154 [2024-11-29 12:10:16.642008] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:26:11.154 [2024-11-29 12:10:16.642167] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:26:11.154 [2024-11-29 12:10:16.642411] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.154 12:10:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:11.154 12:10:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:11.154 12:10:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:11.154 12:10:16 -- common/autotest_common.sh@899 -- # local i 00:26:11.154 12:10:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:11.154 12:10:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:11.154 12:10:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.721 12:10:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:11.721 [ 00:26:11.721 { 00:26:11.721 "name": "BaseBdev3", 00:26:11.721 "aliases": [ 00:26:11.721 
"acd246dc-f16e-405e-b91e-5f686236acc7" 00:26:11.721 ], 00:26:11.721 "product_name": "Malloc disk", 00:26:11.721 "block_size": 512, 00:26:11.721 "num_blocks": 65536, 00:26:11.721 "uuid": "acd246dc-f16e-405e-b91e-5f686236acc7", 00:26:11.721 "assigned_rate_limits": { 00:26:11.721 "rw_ios_per_sec": 0, 00:26:11.721 "rw_mbytes_per_sec": 0, 00:26:11.721 "r_mbytes_per_sec": 0, 00:26:11.721 "w_mbytes_per_sec": 0 00:26:11.721 }, 00:26:11.721 "claimed": true, 00:26:11.721 "claim_type": "exclusive_write", 00:26:11.721 "zoned": false, 00:26:11.721 "supported_io_types": { 00:26:11.721 "read": true, 00:26:11.721 "write": true, 00:26:11.721 "unmap": true, 00:26:11.721 "write_zeroes": true, 00:26:11.721 "flush": true, 00:26:11.721 "reset": true, 00:26:11.721 "compare": false, 00:26:11.721 "compare_and_write": false, 00:26:11.721 "abort": true, 00:26:11.721 "nvme_admin": false, 00:26:11.721 "nvme_io": false 00:26:11.721 }, 00:26:11.721 "memory_domains": [ 00:26:11.721 { 00:26:11.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.721 "dma_device_type": 2 00:26:11.721 } 00:26:11.721 ], 00:26:11.721 "driver_specific": {} 00:26:11.721 } 00:26:11.721 ] 00:26:11.721 12:10:17 -- common/autotest_common.sh@905 -- # return 0 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.721 12:10:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.980 12:10:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:11.980 "name": "Existed_Raid", 00:26:11.980 "uuid": "35f01b2b-21ed-4c43-a65d-d608d084c355", 00:26:11.980 "strip_size_kb": 64, 00:26:11.980 "state": "online", 00:26:11.980 "raid_level": "raid5f", 00:26:11.980 "superblock": true, 00:26:11.980 "num_base_bdevs": 3, 00:26:11.980 "num_base_bdevs_discovered": 3, 00:26:11.980 "num_base_bdevs_operational": 3, 00:26:11.980 "base_bdevs_list": [ 00:26:11.980 { 00:26:11.980 "name": "BaseBdev1", 00:26:11.980 "uuid": "d22ed50b-baca-48ed-873e-c508ccc301c4", 00:26:11.980 "is_configured": true, 00:26:11.980 "data_offset": 2048, 00:26:11.980 "data_size": 63488 00:26:11.980 }, 00:26:11.980 { 00:26:11.980 "name": "BaseBdev2", 00:26:11.980 "uuid": "34253553-a2f3-43ea-b174-547fcb4d2ed0", 00:26:11.980 "is_configured": true, 00:26:11.980 "data_offset": 2048, 00:26:11.980 "data_size": 63488 00:26:11.980 }, 00:26:11.980 { 00:26:11.980 "name": "BaseBdev3", 00:26:11.980 "uuid": "acd246dc-f16e-405e-b91e-5f686236acc7", 00:26:11.980 "is_configured": true, 00:26:11.980 "data_offset": 2048, 00:26:11.980 "data_size": 63488 00:26:11.980 } 
00:26:11.980 ] 00:26:11.980 }' 00:26:11.980 12:10:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:11.980 12:10:17 -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:12.914 [2024-11-29 12:10:18.360744] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.914 12:10:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.172 12:10:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.172 "name": "Existed_Raid", 00:26:13.172 "uuid": "35f01b2b-21ed-4c43-a65d-d608d084c355", 00:26:13.172 "strip_size_kb": 64, 00:26:13.172 "state": "online", 00:26:13.172 "raid_level": "raid5f", 00:26:13.172 "superblock": true, 00:26:13.172 "num_base_bdevs": 3, 00:26:13.172 "num_base_bdevs_discovered": 2, 00:26:13.172 "num_base_bdevs_operational": 2, 00:26:13.172 "base_bdevs_list": [ 00:26:13.172 { 00:26:13.172 "name": null, 00:26:13.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.172 "is_configured": false, 00:26:13.172 "data_offset": 2048, 00:26:13.172 "data_size": 63488 00:26:13.172 }, 00:26:13.172 { 00:26:13.172 "name": "BaseBdev2", 00:26:13.172 "uuid": "34253553-a2f3-43ea-b174-547fcb4d2ed0", 00:26:13.172 "is_configured": true, 00:26:13.172 "data_offset": 2048, 00:26:13.172 "data_size": 63488 00:26:13.172 }, 00:26:13.172 { 00:26:13.172 "name": "BaseBdev3", 00:26:13.172 "uuid": "acd246dc-f16e-405e-b91e-5f686236acc7", 00:26:13.173 "is_configured": true, 00:26:13.173 "data_offset": 2048, 00:26:13.173 "data_size": 63488 00:26:13.173 } 00:26:13.173 ] 00:26:13.173 }' 00:26:13.173 12:10:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.173 12:10:18 -- common/autotest_common.sh@10 -- # set +x 00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 
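Condensed sketch of the redundancy check traced above: BaseBdev1 is removed and, because raid5f tolerates the loss of one base bdev, Existed_Raid is expected to remain online with 2 of 3 members. The commands, socket path and names follow the log; the explicit assertion at the end is illustrative and not part of the test's code.
sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Remove one base bdev from the running raid5f array.
$rpc -s "$sock" bdev_malloc_delete BaseBdev1
# The array should survive the loss of a single member.
state=$($rpc -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[ "$state" = online ] || echo "unexpected state after losing one base bdev: $state"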
00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:14.126 12:10:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:14.384 [2024-11-29 12:10:19.774709] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:14.384 [2024-11-29 12:10:19.775088] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:14.384 [2024-11-29 12:10:19.775298] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:14.384 12:10:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:14.384 12:10:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:14.384 12:10:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.384 12:10:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:14.641 12:10:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:14.641 12:10:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:14.641 12:10:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:14.899 [2024-11-29 12:10:20.305711] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:14.899 [2024-11-29 12:10:20.306128] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:26:14.899 12:10:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:14.899 12:10:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:14.899 12:10:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.899 12:10:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:15.157 12:10:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:15.157 12:10:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:15.157 12:10:20 -- bdev/bdev_raid.sh@287 -- # killprocess 138747 00:26:15.157 12:10:20 -- common/autotest_common.sh@936 -- # '[' -z 138747 ']' 00:26:15.157 12:10:20 -- common/autotest_common.sh@940 -- # kill -0 138747 00:26:15.157 12:10:20 -- common/autotest_common.sh@941 -- # uname 00:26:15.157 12:10:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:15.157 12:10:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138747 00:26:15.157 12:10:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:15.157 12:10:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:15.157 12:10:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138747' 00:26:15.157 killing process with pid 138747 00:26:15.157 12:10:20 -- common/autotest_common.sh@955 -- # kill 138747 00:26:15.157 [2024-11-29 12:10:20.644610] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:15.157 12:10:20 -- common/autotest_common.sh@960 -- # wait 138747 00:26:15.157 [2024-11-29 12:10:20.644876] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:15.414 12:10:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:15.415 00:26:15.415 real 0m12.877s 00:26:15.415 user 0m23.548s 00:26:15.415 sys 0m1.753s 00:26:15.415 12:10:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:15.415 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.415 ************************************ 00:26:15.415 END TEST 
raid5f_state_function_test_sb 00:26:15.415 ************************************ 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:26:15.672 12:10:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:15.672 12:10:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:15.672 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.672 ************************************ 00:26:15.672 START TEST raid5f_superblock_test 00:26:15.672 ************************************ 00:26:15.672 12:10:20 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:26:15.672 12:10:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@357 -- # raid_pid=139134 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 139134 /var/tmp/spdk-raid.sock 00:26:15.673 12:10:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:15.673 12:10:20 -- common/autotest_common.sh@829 -- # '[' -z 139134 ']' 00:26:15.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:15.673 12:10:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:15.673 12:10:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.673 12:10:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:15.673 12:10:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.673 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.673 [2024-11-29 12:10:21.010643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
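Sketch of the bdev stack raid5f_superblock_test assembles next: each malloc bdev is wrapped in a passthru bdev (pt1..pt3) with a fixed UUID, and the raid5f array is created on top with -s so a superblock is written to the members. Sizes, names, UUIDs and flags are taken from the trace below; the loop is an illustrative condensation, not the test's exact code.
sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, then a passthru bdev on top of it.
    $rpc -s "$sock" bdev_malloc_create 32 512 -b malloc$i
    $rpc -s "$sock" bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done
# raid5f over the three passthru bdevs, 64 KiB strip size, with superblock (-s).
$rpc -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s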
00:26:15.673 [2024-11-29 12:10:21.011258] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139134 ] 00:26:15.673 [2024-11-29 12:10:21.153519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.931 [2024-11-29 12:10:21.251019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.931 [2024-11-29 12:10:21.306461] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:16.864 12:10:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.864 12:10:22 -- common/autotest_common.sh@862 -- # return 0 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:16.864 malloc1 00:26:16.864 12:10:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:17.122 [2024-11-29 12:10:22.536167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:17.122 [2024-11-29 12:10:22.536611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.122 [2024-11-29 12:10:22.536802] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:26:17.122 [2024-11-29 12:10:22.537029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.123 [2024-11-29 12:10:22.539996] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.123 [2024-11-29 12:10:22.540194] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:17.123 pt1 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:17.123 12:10:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:17.380 malloc2 00:26:17.380 12:10:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:26:17.638 [2024-11-29 12:10:23.028153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:17.638 [2024-11-29 12:10:23.028575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.638 [2024-11-29 12:10:23.028680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:17.638 [2024-11-29 12:10:23.028940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.638 [2024-11-29 12:10:23.031717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.638 [2024-11-29 12:10:23.031905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:17.638 pt2 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:17.638 12:10:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:17.896 malloc3 00:26:17.896 12:10:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:18.153 [2024-11-29 12:10:23.575576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:18.153 [2024-11-29 12:10:23.576006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:18.153 [2024-11-29 12:10:23.576127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:18.153 [2024-11-29 12:10:23.576324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:18.153 [2024-11-29 12:10:23.579026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:18.153 [2024-11-29 12:10:23.579219] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:18.153 pt3 00:26:18.153 12:10:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:18.153 12:10:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:18.153 12:10:23 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:26:18.411 [2024-11-29 12:10:23.811783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:18.411 [2024-11-29 12:10:23.814450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:18.411 [2024-11-29 12:10:23.814698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:18.411 [2024-11-29 12:10:23.815023] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:26:18.411 [2024-11-29 12:10:23.815156] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:18.411 [2024-11-29 12:10:23.815369] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:26:18.411 [2024-11-29 12:10:23.816322] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:26:18.411 [2024-11-29 12:10:23.816461] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:26:18.411 [2024-11-29 12:10:23.816791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.411 12:10:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.669 12:10:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:18.669 "name": "raid_bdev1", 00:26:18.669 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:18.669 "strip_size_kb": 64, 00:26:18.669 "state": "online", 00:26:18.669 "raid_level": "raid5f", 00:26:18.669 "superblock": true, 00:26:18.669 "num_base_bdevs": 3, 00:26:18.669 "num_base_bdevs_discovered": 3, 00:26:18.669 "num_base_bdevs_operational": 3, 00:26:18.669 "base_bdevs_list": [ 00:26:18.669 { 00:26:18.669 "name": "pt1", 00:26:18.669 "uuid": "dc11e870-2e25-59ea-ac54-309584d1a02b", 00:26:18.669 "is_configured": true, 00:26:18.669 "data_offset": 2048, 00:26:18.669 "data_size": 63488 00:26:18.669 }, 00:26:18.669 { 00:26:18.669 "name": "pt2", 00:26:18.669 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:18.669 "is_configured": true, 00:26:18.669 "data_offset": 2048, 00:26:18.669 "data_size": 63488 00:26:18.669 }, 00:26:18.669 { 00:26:18.669 "name": "pt3", 00:26:18.669 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:18.669 "is_configured": true, 00:26:18.669 "data_offset": 2048, 00:26:18.669 "data_size": 63488 00:26:18.669 } 00:26:18.669 ] 00:26:18.669 }' 00:26:18.669 12:10:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:18.669 12:10:24 -- common/autotest_common.sh@10 -- # set +x 00:26:19.235 12:10:24 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:19.235 12:10:24 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:19.493 [2024-11-29 12:10:24.937229] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:19.493 12:10:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a3f72cf7-1ca7-4e85-a862-c416d425f27d 00:26:19.493 12:10:24 -- bdev/bdev_raid.sh@380 -- # '[' -z a3f72cf7-1ca7-4e85-a862-c416d425f27d ']' 00:26:19.493 12:10:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:19.750 [2024-11-29 12:10:25.169057] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.751 [2024-11-29 12:10:25.169395] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.751 [2024-11-29 12:10:25.169640] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.751 [2024-11-29 12:10:25.169878] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:19.751 [2024-11-29 12:10:25.170009] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:26:19.751 12:10:25 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:19.751 12:10:25 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.007 12:10:25 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:20.007 12:10:25 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:20.007 12:10:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.007 12:10:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:20.265 12:10:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.265 12:10:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:20.522 12:10:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.522 12:10:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:20.779 12:10:26 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:20.779 12:10:26 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:21.037 12:10:26 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:21.037 12:10:26 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:21.037 12:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:26:21.037 12:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:21.037 12:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.037 12:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.037 12:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.037 12:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.037 12:10:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.037 12:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.037 12:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.037 12:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:21.037 12:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:21.305 [2024-11-29 12:10:26.677370] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:21.306 [2024-11-29 12:10:26.679953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:21.306 [2024-11-29 12:10:26.680153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:21.306 [2024-11-29 12:10:26.680260] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:21.306 [2024-11-29 12:10:26.680597] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:21.306 [2024-11-29 12:10:26.680776] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:26:21.306 [2024-11-29 12:10:26.680951] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:21.306 [2024-11-29 12:10:26.681000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:26:21.306 request: 00:26:21.306 { 00:26:21.306 "name": "raid_bdev1", 00:26:21.306 "raid_level": "raid5f", 00:26:21.306 "base_bdevs": [ 00:26:21.306 "malloc1", 00:26:21.306 "malloc2", 00:26:21.306 "malloc3" 00:26:21.306 ], 00:26:21.306 "superblock": false, 00:26:21.306 "strip_size_kb": 64, 00:26:21.306 "method": "bdev_raid_create", 00:26:21.306 "req_id": 1 00:26:21.306 } 00:26:21.306 Got JSON-RPC error response 00:26:21.306 response: 00:26:21.306 { 00:26:21.306 "code": -17, 00:26:21.306 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:21.306 } 00:26:21.306 12:10:26 -- common/autotest_common.sh@653 -- # es=1 00:26:21.306 12:10:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:21.306 12:10:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:21.306 12:10:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:21.306 12:10:26 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.306 12:10:26 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:21.563 12:10:26 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:21.563 12:10:26 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:21.563 12:10:26 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:21.819 [2024-11-29 12:10:27.205509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:21.819 [2024-11-29 12:10:27.205913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.819 [2024-11-29 12:10:27.206007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:21.819 [2024-11-29 12:10:27.206330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.819 [2024-11-29 12:10:27.208955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.819 [2024-11-29 12:10:27.209134] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:21.819 [2024-11-29 12:10:27.209396] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:21.819 [2024-11-29 12:10:27.209603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:21.819 pt1 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.819 12:10:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.076 12:10:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:22.076 "name": "raid_bdev1", 00:26:22.076 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:22.076 "strip_size_kb": 64, 00:26:22.076 "state": "configuring", 00:26:22.076 "raid_level": "raid5f", 00:26:22.076 "superblock": true, 00:26:22.076 "num_base_bdevs": 3, 00:26:22.076 "num_base_bdevs_discovered": 1, 00:26:22.076 "num_base_bdevs_operational": 3, 00:26:22.076 "base_bdevs_list": [ 00:26:22.076 { 00:26:22.076 "name": "pt1", 00:26:22.076 "uuid": "dc11e870-2e25-59ea-ac54-309584d1a02b", 00:26:22.076 "is_configured": true, 00:26:22.076 "data_offset": 2048, 00:26:22.076 "data_size": 63488 00:26:22.076 }, 00:26:22.076 { 00:26:22.076 "name": null, 00:26:22.076 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:22.076 "is_configured": false, 00:26:22.076 "data_offset": 2048, 00:26:22.076 "data_size": 63488 00:26:22.076 }, 00:26:22.076 { 00:26:22.076 "name": null, 00:26:22.076 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:22.076 "is_configured": false, 00:26:22.076 "data_offset": 2048, 00:26:22.076 "data_size": 63488 00:26:22.076 } 00:26:22.076 ] 00:26:22.076 }' 00:26:22.076 12:10:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:22.076 12:10:27 -- common/autotest_common.sh@10 -- # set +x 00:26:22.639 12:10:28 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:26:22.639 12:10:28 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:22.897 [2024-11-29 12:10:28.333788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:22.897 [2024-11-29 12:10:28.334203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.897 [2024-11-29 12:10:28.334404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:22.897 [2024-11-29 12:10:28.334576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.897 [2024-11-29 12:10:28.335164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.897 [2024-11-29 12:10:28.335329] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:22.897 [2024-11-29 12:10:28.335549] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:22.897 [2024-11-29 12:10:28.335694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:22.897 pt2 00:26:22.897 12:10:28 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:23.155 [2024-11-29 12:10:28.597879] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.155 12:10:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.414 12:10:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:23.414 "name": "raid_bdev1", 00:26:23.414 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:23.414 "strip_size_kb": 64, 00:26:23.414 "state": "configuring", 00:26:23.414 "raid_level": "raid5f", 00:26:23.414 "superblock": true, 00:26:23.414 "num_base_bdevs": 3, 00:26:23.414 "num_base_bdevs_discovered": 1, 00:26:23.414 "num_base_bdevs_operational": 3, 00:26:23.414 "base_bdevs_list": [ 00:26:23.414 { 00:26:23.414 "name": "pt1", 00:26:23.414 "uuid": "dc11e870-2e25-59ea-ac54-309584d1a02b", 00:26:23.414 "is_configured": true, 00:26:23.414 "data_offset": 2048, 00:26:23.414 "data_size": 63488 00:26:23.414 }, 00:26:23.414 { 00:26:23.414 "name": null, 00:26:23.414 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:23.414 "is_configured": false, 00:26:23.414 "data_offset": 2048, 00:26:23.414 "data_size": 63488 00:26:23.414 }, 00:26:23.414 { 00:26:23.414 "name": null, 00:26:23.414 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:23.414 "is_configured": false, 00:26:23.414 "data_offset": 2048, 00:26:23.414 "data_size": 63488 00:26:23.414 } 00:26:23.414 ] 00:26:23.414 }' 00:26:23.414 12:10:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:23.414 12:10:28 -- common/autotest_common.sh@10 -- # set +x 00:26:24.348 12:10:29 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:24.348 12:10:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:24.348 12:10:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:24.348 [2024-11-29 12:10:29.714088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:24.348 [2024-11-29 12:10:29.714492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.348 [2024-11-29 12:10:29.714668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:24.348 [2024-11-29 12:10:29.714808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.348 [2024-11-29 12:10:29.715408] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.348 [2024-11-29 12:10:29.715573] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:24.348 [2024-11-29 12:10:29.715795] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:24.348 [2024-11-29 12:10:29.715940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:24.348 pt2 00:26:24.348 12:10:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:24.348 12:10:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:24.348 12:10:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:24.606 [2024-11-29 12:10:29.950199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:24.606 [2024-11-29 12:10:29.950622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.606 [2024-11-29 12:10:29.950795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:24.606 [2024-11-29 12:10:29.950931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.606 [2024-11-29 12:10:29.951576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.606 [2024-11-29 12:10:29.951749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:24.606 [2024-11-29 12:10:29.952026] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:24.606 [2024-11-29 12:10:29.952185] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:24.606 [2024-11-29 12:10:29.952465] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:26:24.606 [2024-11-29 12:10:29.952593] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:24.606 [2024-11-29 12:10:29.952710] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:26:24.606 [2024-11-29 12:10:29.953431] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:26:24.606 [2024-11-29 12:10:29.953569] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:26:24.606 [2024-11-29 12:10:29.953794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.606 pt3 00:26:24.606 12:10:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:24.606 12:10:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:24.606 12:10:29 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:24.606 12:10:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:24.606 12:10:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:24.607 12:10:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.607 
12:10:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.865 12:10:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:24.865 "name": "raid_bdev1", 00:26:24.865 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:24.865 "strip_size_kb": 64, 00:26:24.865 "state": "online", 00:26:24.865 "raid_level": "raid5f", 00:26:24.865 "superblock": true, 00:26:24.865 "num_base_bdevs": 3, 00:26:24.865 "num_base_bdevs_discovered": 3, 00:26:24.865 "num_base_bdevs_operational": 3, 00:26:24.865 "base_bdevs_list": [ 00:26:24.865 { 00:26:24.865 "name": "pt1", 00:26:24.865 "uuid": "dc11e870-2e25-59ea-ac54-309584d1a02b", 00:26:24.865 "is_configured": true, 00:26:24.865 "data_offset": 2048, 00:26:24.865 "data_size": 63488 00:26:24.865 }, 00:26:24.865 { 00:26:24.865 "name": "pt2", 00:26:24.865 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:24.865 "is_configured": true, 00:26:24.865 "data_offset": 2048, 00:26:24.865 "data_size": 63488 00:26:24.865 }, 00:26:24.865 { 00:26:24.865 "name": "pt3", 00:26:24.865 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:24.865 "is_configured": true, 00:26:24.865 "data_offset": 2048, 00:26:24.865 "data_size": 63488 00:26:24.865 } 00:26:24.865 ] 00:26:24.865 }' 00:26:24.865 12:10:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:24.865 12:10:30 -- common/autotest_common.sh@10 -- # set +x 00:26:25.432 12:10:30 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:25.432 12:10:30 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:25.690 [2024-11-29 12:10:31.095887] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:25.690 12:10:31 -- bdev/bdev_raid.sh@430 -- # '[' a3f72cf7-1ca7-4e85-a862-c416d425f27d '!=' a3f72cf7-1ca7-4e85-a862-c416d425f27d ']' 00:26:25.690 12:10:31 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:26:25.690 12:10:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:25.690 12:10:31 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:25.690 12:10:31 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:25.955 [2024-11-29 12:10:31.371842] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.955 12:10:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.213 12:10:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:26.213 "name": "raid_bdev1", 00:26:26.213 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:26.213 "strip_size_kb": 64, 
00:26:26.213 "state": "online", 00:26:26.213 "raid_level": "raid5f", 00:26:26.213 "superblock": true, 00:26:26.213 "num_base_bdevs": 3, 00:26:26.213 "num_base_bdevs_discovered": 2, 00:26:26.213 "num_base_bdevs_operational": 2, 00:26:26.213 "base_bdevs_list": [ 00:26:26.213 { 00:26:26.213 "name": null, 00:26:26.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.213 "is_configured": false, 00:26:26.213 "data_offset": 2048, 00:26:26.213 "data_size": 63488 00:26:26.213 }, 00:26:26.213 { 00:26:26.213 "name": "pt2", 00:26:26.213 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:26.213 "is_configured": true, 00:26:26.213 "data_offset": 2048, 00:26:26.213 "data_size": 63488 00:26:26.213 }, 00:26:26.213 { 00:26:26.213 "name": "pt3", 00:26:26.213 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:26.213 "is_configured": true, 00:26:26.213 "data_offset": 2048, 00:26:26.213 "data_size": 63488 00:26:26.213 } 00:26:26.213 ] 00:26:26.213 }' 00:26:26.213 12:10:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:26.213 12:10:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.779 12:10:32 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:27.036 [2024-11-29 12:10:32.492014] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:27.036 [2024-11-29 12:10:32.492345] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:27.036 [2024-11-29 12:10:32.492554] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:27.036 [2024-11-29 12:10:32.492760] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:27.036 [2024-11-29 12:10:32.492889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:26:27.036 12:10:32 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.036 12:10:32 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:26:27.294 12:10:32 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:26:27.294 12:10:32 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:26:27.294 12:10:32 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:26:27.294 12:10:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:27.294 12:10:32 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:27.603 12:10:32 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:27.603 12:10:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:27.603 12:10:32 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:27.861 12:10:33 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:27.861 12:10:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:27.861 12:10:33 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:26:27.861 12:10:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:27.861 12:10:33 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:28.119 [2024-11-29 12:10:33.480171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:28.119 [2024-11-29 12:10:33.480570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:26:28.120 [2024-11-29 12:10:33.480661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:28.120 [2024-11-29 12:10:33.480804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.120 [2024-11-29 12:10:33.483516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.120 [2024-11-29 12:10:33.483707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:28.120 [2024-11-29 12:10:33.483971] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:28.120 [2024-11-29 12:10:33.484132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:28.120 pt2 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.120 12:10:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.378 12:10:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.378 "name": "raid_bdev1", 00:26:28.378 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:28.378 "strip_size_kb": 64, 00:26:28.378 "state": "configuring", 00:26:28.378 "raid_level": "raid5f", 00:26:28.378 "superblock": true, 00:26:28.378 "num_base_bdevs": 3, 00:26:28.378 "num_base_bdevs_discovered": 1, 00:26:28.378 "num_base_bdevs_operational": 2, 00:26:28.378 "base_bdevs_list": [ 00:26:28.378 { 00:26:28.378 "name": null, 00:26:28.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.378 "is_configured": false, 00:26:28.378 "data_offset": 2048, 00:26:28.378 "data_size": 63488 00:26:28.378 }, 00:26:28.378 { 00:26:28.378 "name": "pt2", 00:26:28.378 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:28.378 "is_configured": true, 00:26:28.378 "data_offset": 2048, 00:26:28.378 "data_size": 63488 00:26:28.378 }, 00:26:28.378 { 00:26:28.378 "name": null, 00:26:28.378 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:28.378 "is_configured": false, 00:26:28.378 "data_offset": 2048, 00:26:28.378 "data_size": 63488 00:26:28.378 } 00:26:28.378 ] 00:26:28.378 }' 00:26:28.378 12:10:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.378 12:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:28.944 12:10:34 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:28.944 12:10:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:28.944 12:10:34 -- bdev/bdev_raid.sh@462 -- # i=2 00:26:28.944 12:10:34 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:29.202 [2024-11-29 12:10:34.648727] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:29.202 [2024-11-29 12:10:34.649128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.202 [2024-11-29 12:10:34.649222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:29.202 [2024-11-29 12:10:34.649506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.202 [2024-11-29 12:10:34.650058] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.202 [2024-11-29 12:10:34.650241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:29.202 [2024-11-29 12:10:34.650490] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:29.202 [2024-11-29 12:10:34.650638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:29.202 [2024-11-29 12:10:34.650813] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:26:29.202 [2024-11-29 12:10:34.650927] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:29.202 [2024-11-29 12:10:34.651056] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:26:29.202 [2024-11-29 12:10:34.651971] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:26:29.202 [2024-11-29 12:10:34.652110] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:26:29.202 [2024-11-29 12:10:34.652462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.202 pt3 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.202 12:10:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.460 12:10:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.460 "name": "raid_bdev1", 00:26:29.460 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:29.460 "strip_size_kb": 64, 00:26:29.460 "state": "online", 00:26:29.460 "raid_level": "raid5f", 00:26:29.460 "superblock": true, 00:26:29.460 "num_base_bdevs": 3, 00:26:29.460 "num_base_bdevs_discovered": 2, 00:26:29.460 "num_base_bdevs_operational": 2, 00:26:29.460 "base_bdevs_list": [ 00:26:29.460 { 00:26:29.460 "name": null, 00:26:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.460 "is_configured": false, 00:26:29.460 "data_offset": 2048, 00:26:29.460 "data_size": 63488 00:26:29.460 }, 00:26:29.460 { 00:26:29.460 "name": "pt2", 00:26:29.460 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 
00:26:29.460 "is_configured": true, 00:26:29.461 "data_offset": 2048, 00:26:29.461 "data_size": 63488 00:26:29.461 }, 00:26:29.461 { 00:26:29.461 "name": "pt3", 00:26:29.461 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:29.461 "is_configured": true, 00:26:29.461 "data_offset": 2048, 00:26:29.461 "data_size": 63488 00:26:29.461 } 00:26:29.461 ] 00:26:29.461 }' 00:26:29.461 12:10:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.461 12:10:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.394 12:10:35 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:26:30.394 12:10:35 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:30.394 [2024-11-29 12:10:35.865046] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:30.394 [2024-11-29 12:10:35.865362] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:30.394 [2024-11-29 12:10:35.865556] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:30.394 [2024-11-29 12:10:35.865798] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:30.394 [2024-11-29 12:10:35.865922] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:26:30.394 12:10:35 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.394 12:10:35 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:26:30.652 12:10:36 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:26:30.652 12:10:36 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:26:30.652 12:10:36 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:30.910 [2024-11-29 12:10:36.385168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:30.910 [2024-11-29 12:10:36.385584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.910 [2024-11-29 12:10:36.385687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:30.910 [2024-11-29 12:10:36.385830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.910 [2024-11-29 12:10:36.388494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.910 [2024-11-29 12:10:36.388683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:30.910 [2024-11-29 12:10:36.388951] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:30.910 [2024-11-29 12:10:36.389108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:30.910 pt1 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:30.910 12:10:36 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.910 12:10:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.169 12:10:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:31.169 "name": "raid_bdev1", 00:26:31.169 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:31.169 "strip_size_kb": 64, 00:26:31.169 "state": "configuring", 00:26:31.169 "raid_level": "raid5f", 00:26:31.169 "superblock": true, 00:26:31.169 "num_base_bdevs": 3, 00:26:31.169 "num_base_bdevs_discovered": 1, 00:26:31.169 "num_base_bdevs_operational": 3, 00:26:31.169 "base_bdevs_list": [ 00:26:31.169 { 00:26:31.169 "name": "pt1", 00:26:31.169 "uuid": "dc11e870-2e25-59ea-ac54-309584d1a02b", 00:26:31.169 "is_configured": true, 00:26:31.169 "data_offset": 2048, 00:26:31.169 "data_size": 63488 00:26:31.169 }, 00:26:31.169 { 00:26:31.169 "name": null, 00:26:31.169 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:31.169 "is_configured": false, 00:26:31.169 "data_offset": 2048, 00:26:31.169 "data_size": 63488 00:26:31.169 }, 00:26:31.169 { 00:26:31.169 "name": null, 00:26:31.169 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:31.169 "is_configured": false, 00:26:31.169 "data_offset": 2048, 00:26:31.169 "data_size": 63488 00:26:31.169 } 00:26:31.169 ] 00:26:31.169 }' 00:26:31.169 12:10:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:31.169 12:10:36 -- common/autotest_common.sh@10 -- # set +x 00:26:31.737 12:10:37 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:26:31.737 12:10:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:31.737 12:10:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:31.997 12:10:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:31.997 12:10:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:31.997 12:10:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:32.256 12:10:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:32.256 12:10:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:32.256 12:10:37 -- bdev/bdev_raid.sh@489 -- # i=2 00:26:32.256 12:10:37 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:32.514 [2024-11-29 12:10:38.001874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:32.514 [2024-11-29 12:10:38.002310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.514 [2024-11-29 12:10:38.002427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:32.514 [2024-11-29 12:10:38.002581] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.514 [2024-11-29 12:10:38.003110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.514 [2024-11-29 12:10:38.003277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:32.514 [2024-11-29 12:10:38.003500] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:26:32.514 [2024-11-29 12:10:38.003627] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:32.514 [2024-11-29 12:10:38.003740] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.514 [2024-11-29 12:10:38.003821] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:26:32.514 [2024-11-29 12:10:38.004017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:32.514 pt3 00:26:32.514 12:10:38 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:32.514 12:10:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:32.514 12:10:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:32.514 12:10:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:32.514 12:10:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.773 12:10:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.032 12:10:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:33.032 "name": "raid_bdev1", 00:26:33.032 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:33.032 "strip_size_kb": 64, 00:26:33.032 "state": "configuring", 00:26:33.032 "raid_level": "raid5f", 00:26:33.032 "superblock": true, 00:26:33.032 "num_base_bdevs": 3, 00:26:33.032 "num_base_bdevs_discovered": 1, 00:26:33.032 "num_base_bdevs_operational": 2, 00:26:33.032 "base_bdevs_list": [ 00:26:33.032 { 00:26:33.032 "name": null, 00:26:33.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.032 "is_configured": false, 00:26:33.032 "data_offset": 2048, 00:26:33.032 "data_size": 63488 00:26:33.032 }, 00:26:33.032 { 00:26:33.032 "name": null, 00:26:33.032 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:33.032 "is_configured": false, 00:26:33.032 "data_offset": 2048, 00:26:33.032 "data_size": 63488 00:26:33.032 }, 00:26:33.032 { 00:26:33.032 "name": "pt3", 00:26:33.032 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:33.032 "is_configured": true, 00:26:33.032 "data_offset": 2048, 00:26:33.032 "data_size": 63488 00:26:33.032 } 00:26:33.032 ] 00:26:33.032 }' 00:26:33.032 12:10:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.032 12:10:38 -- common/autotest_common.sh@10 -- # set +x 00:26:33.598 12:10:39 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:26:33.598 12:10:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:33.598 12:10:39 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:33.857 [2024-11-29 12:10:39.298186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:33.857 [2024-11-29 12:10:39.298634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.857 [2024-11-29 
12:10:39.298722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:33.857 [2024-11-29 12:10:39.298960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.857 [2024-11-29 12:10:39.299502] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.857 [2024-11-29 12:10:39.299669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:33.857 [2024-11-29 12:10:39.299882] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:33.857 [2024-11-29 12:10:39.300023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:33.858 [2024-11-29 12:10:39.300198] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:26:33.858 [2024-11-29 12:10:39.300315] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:33.858 [2024-11-29 12:10:39.300447] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:26:33.858 [2024-11-29 12:10:39.301231] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:26:33.858 [2024-11-29 12:10:39.301369] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:26:33.858 [2024-11-29 12:10:39.301654] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.858 pt2 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.858 12:10:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.116 12:10:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:34.116 "name": "raid_bdev1", 00:26:34.116 "uuid": "a3f72cf7-1ca7-4e85-a862-c416d425f27d", 00:26:34.116 "strip_size_kb": 64, 00:26:34.117 "state": "online", 00:26:34.117 "raid_level": "raid5f", 00:26:34.117 "superblock": true, 00:26:34.117 "num_base_bdevs": 3, 00:26:34.117 "num_base_bdevs_discovered": 2, 00:26:34.117 "num_base_bdevs_operational": 2, 00:26:34.117 "base_bdevs_list": [ 00:26:34.117 { 00:26:34.117 "name": null, 00:26:34.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.117 "is_configured": false, 00:26:34.117 "data_offset": 2048, 00:26:34.117 "data_size": 63488 00:26:34.117 }, 00:26:34.117 { 00:26:34.117 "name": "pt2", 00:26:34.117 "uuid": "e78c75ab-4409-5564-b082-f3d282031ca6", 00:26:34.117 "is_configured": true, 00:26:34.117 "data_offset": 2048, 
00:26:34.117 "data_size": 63488 00:26:34.117 }, 00:26:34.117 { 00:26:34.117 "name": "pt3", 00:26:34.117 "uuid": "15f2c5a2-0721-5159-8432-90ce9327a953", 00:26:34.117 "is_configured": true, 00:26:34.117 "data_offset": 2048, 00:26:34.117 "data_size": 63488 00:26:34.117 } 00:26:34.117 ] 00:26:34.117 }' 00:26:34.117 12:10:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:34.117 12:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:35.068 12:10:40 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:35.068 12:10:40 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:35.068 [2024-11-29 12:10:40.427803] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:35.068 12:10:40 -- bdev/bdev_raid.sh@506 -- # '[' a3f72cf7-1ca7-4e85-a862-c416d425f27d '!=' a3f72cf7-1ca7-4e85-a862-c416d425f27d ']' 00:26:35.068 12:10:40 -- bdev/bdev_raid.sh@511 -- # killprocess 139134 00:26:35.068 12:10:40 -- common/autotest_common.sh@936 -- # '[' -z 139134 ']' 00:26:35.068 12:10:40 -- common/autotest_common.sh@940 -- # kill -0 139134 00:26:35.068 12:10:40 -- common/autotest_common.sh@941 -- # uname 00:26:35.068 12:10:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:35.068 12:10:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139134 00:26:35.068 12:10:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:35.068 12:10:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:35.068 12:10:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139134' 00:26:35.068 killing process with pid 139134 00:26:35.068 12:10:40 -- common/autotest_common.sh@955 -- # kill 139134 00:26:35.068 [2024-11-29 12:10:40.477978] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:35.068 12:10:40 -- common/autotest_common.sh@960 -- # wait 139134 00:26:35.068 [2024-11-29 12:10:40.478243] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.068 [2024-11-29 12:10:40.478482] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.068 [2024-11-29 12:10:40.478607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:26:35.068 [2024-11-29 12:10:40.521918] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:35.326 00:26:35.326 real 0m19.820s 00:26:35.326 user 0m36.965s 00:26:35.326 sys 0m2.690s 00:26:35.326 12:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:35.326 12:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.326 ************************************ 00:26:35.326 END TEST raid5f_superblock_test 00:26:35.326 ************************************ 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:26:35.326 12:10:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:35.326 12:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:35.326 12:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.326 ************************************ 00:26:35.326 START TEST raid5f_rebuild_test 00:26:35.326 ************************************ 00:26:35.326 12:10:40 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 
false false 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:35.326 12:10:40 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@544 -- # raid_pid=139747 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139747 /var/tmp/spdk-raid.sock 00:26:35.584 12:10:40 -- common/autotest_common.sh@829 -- # '[' -z 139747 ']' 00:26:35.584 12:10:40 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:35.584 12:10:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:35.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:35.584 12:10:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.584 12:10:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:35.584 12:10:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.584 12:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.584 [2024-11-29 12:10:40.893803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:35.584 [2024-11-29 12:10:40.894272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139747 ] 00:26:35.584 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:35.584 Zero copy mechanism will not be used. 
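For anyone replaying this stage by hand: the trace above amounts to starting bdevperf as the long-lived RPC target for the rebuild test and waiting for its socket. A minimal sketch under the paths used in this run (the real script waits via its waitforlisten helper; the polling loop below is an illustrative stand-in):

  # Sketch only -- mirrors the bdevperf invocation recorded above.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  RPC_SOCK=/var/tmp/spdk-raid.sock

  # 3 MiB I/O, 50/50 random read/write mix (-M 50), queue depth 2,
  # stay idle until RPC starts the job (-z), raid debug logging on (-L bdev_raid).
  "$SPDK_REPO/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Stand-in for waitforlisten: poll until the RPC socket answers.
  until "$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done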
00:26:35.584 [2024-11-29 12:10:41.040485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.841 [2024-11-29 12:10:41.137651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.841 [2024-11-29 12:10:41.193335] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:36.405 12:10:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.405 12:10:41 -- common/autotest_common.sh@862 -- # return 0 00:26:36.405 12:10:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:36.405 12:10:41 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:36.405 12:10:41 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:36.663 BaseBdev1 00:26:36.663 12:10:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:36.663 12:10:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:36.663 12:10:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:36.921 BaseBdev2 00:26:36.921 12:10:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:36.921 12:10:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:36.921 12:10:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:37.179 BaseBdev3 00:26:37.179 12:10:42 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:37.437 spare_malloc 00:26:37.437 12:10:42 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:37.694 spare_delay 00:26:37.694 12:10:43 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:37.952 [2024-11-29 12:10:43.339597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:37.952 [2024-11-29 12:10:43.340043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.952 [2024-11-29 12:10:43.340231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:37.952 [2024-11-29 12:10:43.340423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.952 [2024-11-29 12:10:43.343439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.952 [2024-11-29 12:10:43.343641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:37.952 spare 00:26:37.952 12:10:43 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:38.210 [2024-11-29 12:10:43.568135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:38.210 [2024-11-29 12:10:43.570743] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:38.210 [2024-11-29 12:10:43.570929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:38.210 [2024-11-29 12:10:43.571084] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:26:38.210 
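The bdev assembly traced above can be reproduced against that bdevperf instance with the same RPC calls; a minimal sketch, reusing only commands that appear in the trace (the delay parameters are in microseconds, i.e. 100 ms write latency on the spare path):

  # Sketch only -- builds the raid5f array the rebuild test exercises.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Three 32 MiB, 512-byte-block malloc bdevs as base devices.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done

  # Spare path: malloc -> delay (0 us reads, 100000 us writes) -> passthru.
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare

  # raid5f, 64 KiB strip size, no superblock in this (false false) variant.
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

  # Confirm the array reports state "online" with three base bdevs.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'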
[2024-11-29 12:10:43.571135] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:38.210 [2024-11-29 12:10:43.571428] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:26:38.210 [2024-11-29 12:10:43.572444] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:26:38.210 [2024-11-29 12:10:43.572586] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:26:38.210 [2024-11-29 12:10:43.572972] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.211 12:10:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.469 12:10:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:38.469 "name": "raid_bdev1", 00:26:38.469 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:38.469 "strip_size_kb": 64, 00:26:38.469 "state": "online", 00:26:38.469 "raid_level": "raid5f", 00:26:38.469 "superblock": false, 00:26:38.469 "num_base_bdevs": 3, 00:26:38.469 "num_base_bdevs_discovered": 3, 00:26:38.469 "num_base_bdevs_operational": 3, 00:26:38.469 "base_bdevs_list": [ 00:26:38.469 { 00:26:38.469 "name": "BaseBdev1", 00:26:38.469 "uuid": "58b5cd26-e5ce-4081-aa2a-bb00c660f4fe", 00:26:38.469 "is_configured": true, 00:26:38.469 "data_offset": 0, 00:26:38.469 "data_size": 65536 00:26:38.469 }, 00:26:38.469 { 00:26:38.469 "name": "BaseBdev2", 00:26:38.469 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:38.469 "is_configured": true, 00:26:38.469 "data_offset": 0, 00:26:38.469 "data_size": 65536 00:26:38.469 }, 00:26:38.469 { 00:26:38.469 "name": "BaseBdev3", 00:26:38.469 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:38.469 "is_configured": true, 00:26:38.469 "data_offset": 0, 00:26:38.469 "data_size": 65536 00:26:38.469 } 00:26:38.469 ] 00:26:38.469 }' 00:26:38.469 12:10:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:38.469 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:26:39.405 12:10:44 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:39.405 12:10:44 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:39.405 [2024-11-29 12:10:44.833461] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:39.405 12:10:44 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:26:39.405 12:10:44 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
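Before any I/O, the test reads the assembled array's geometry back over RPC: bdev_get_bdevs reports 131072 blocks for raid_bdev1, which is consistent with raid5f over three 65536-block base bdevs (one strip per stripe holds parity, so usable capacity is 2 x 65536 blocks), and bdev_raid_get_bdevs reports data_offset 0 because this run builds the array without a superblock. The same two queries in isolation, commands and jq filters copied from the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'                     # 131072
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'      # 0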
00:26:39.405 12:10:44 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:39.668 12:10:45 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:39.668 12:10:45 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:39.668 12:10:45 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:39.668 12:10:45 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@12 -- # local i 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:39.668 12:10:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:39.932 [2024-11-29 12:10:45.305404] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:26:39.932 /dev/nbd0 00:26:39.932 12:10:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:39.932 12:10:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:39.932 12:10:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:39.932 12:10:45 -- common/autotest_common.sh@867 -- # local i 00:26:39.932 12:10:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:39.932 12:10:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:39.932 12:10:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:39.932 12:10:45 -- common/autotest_common.sh@871 -- # break 00:26:39.932 12:10:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:39.932 12:10:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:39.932 12:10:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.932 1+0 records in 00:26:39.932 1+0 records out 00:26:39.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436214 s, 9.4 MB/s 00:26:39.932 12:10:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.932 12:10:45 -- common/autotest_common.sh@884 -- # size=4096 00:26:39.932 12:10:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.932 12:10:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:39.932 12:10:45 -- common/autotest_common.sh@887 -- # return 0 00:26:39.932 12:10:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:39.932 12:10:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:39.932 12:10:45 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:39.932 12:10:45 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:26:39.932 12:10:45 -- bdev/bdev_raid.sh@582 -- # echo 128 00:26:39.932 12:10:45 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:26:40.499 512+0 records in 00:26:40.499 512+0 records out 00:26:40.499 67108864 bytes (67 MB, 64 MiB) copied, 0.367114 s, 183 MB/s 00:26:40.499 12:10:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:40.499 12:10:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
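The array is then exposed as a block device through NBD and filled with random data. With a 64 KiB strip and three base bdevs, a full raid5f stripe carries 2 x 64 KiB of data (the third strip holds parity), which is presumably why write_unit_size is set to 256 512-byte blocks and dd writes in 131072-byte chunks: every write lands as a full-stripe write. Condensed from the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
  # 512 writes of 128 KiB (one full stripe each) = 64 MiB of test data
  dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0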
00:26:40.499 12:10:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:40.499 12:10:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:40.499 12:10:45 -- bdev/nbd_common.sh@51 -- # local i 00:26:40.499 12:10:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.499 12:10:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:40.758 12:10:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:40.758 [2024-11-29 12:10:46.032866] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.758 12:10:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:40.758 12:10:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:40.759 12:10:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.759 12:10:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.759 12:10:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:40.759 12:10:46 -- bdev/nbd_common.sh@41 -- # break 00:26:40.759 12:10:46 -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.759 12:10:46 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:41.024 [2024-11-29 12:10:46.296529] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:41.024 12:10:46 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:41.024 12:10:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:41.024 12:10:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:41.024 12:10:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.025 12:10:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.283 12:10:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:41.283 "name": "raid_bdev1", 00:26:41.283 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:41.283 "strip_size_kb": 64, 00:26:41.283 "state": "online", 00:26:41.283 "raid_level": "raid5f", 00:26:41.283 "superblock": false, 00:26:41.283 "num_base_bdevs": 3, 00:26:41.283 "num_base_bdevs_discovered": 2, 00:26:41.283 "num_base_bdevs_operational": 2, 00:26:41.283 "base_bdevs_list": [ 00:26:41.283 { 00:26:41.283 "name": null, 00:26:41.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.283 "is_configured": false, 00:26:41.283 "data_offset": 0, 00:26:41.283 "data_size": 65536 00:26:41.283 }, 00:26:41.283 { 00:26:41.283 "name": "BaseBdev2", 00:26:41.283 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:41.283 "is_configured": true, 00:26:41.283 "data_offset": 0, 00:26:41.283 "data_size": 65536 00:26:41.283 }, 00:26:41.283 { 00:26:41.283 "name": "BaseBdev3", 00:26:41.283 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:41.283 "is_configured": true, 00:26:41.283 "data_offset": 0, 00:26:41.283 "data_size": 65536 00:26:41.283 } 00:26:41.283 ] 00:26:41.283 }' 
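Next the test degrades the array: BaseBdev1 is removed and the state check expects raid_bdev1 to stay online with only two of three base bdevs discovered and operational (the removed slot shows up as a null entry with the all-zero uuid). A condensed form of that remove-and-verify step; the RPC command and the select filter are taken from the trace, the one-line summary expression is added here for illustration:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # expected output: online 2/2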
00:26:41.283 12:10:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:41.283 12:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:41.849 12:10:47 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:42.108 [2024-11-29 12:10:47.444719] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:42.108 [2024-11-29 12:10:47.445057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:42.108 [2024-11-29 12:10:47.450223] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027990 00:26:42.108 [2024-11-29 12:10:47.453042] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:42.108 12:10:47 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.043 12:10:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.300 12:10:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:43.300 "name": "raid_bdev1", 00:26:43.300 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:43.300 "strip_size_kb": 64, 00:26:43.300 "state": "online", 00:26:43.300 "raid_level": "raid5f", 00:26:43.300 "superblock": false, 00:26:43.300 "num_base_bdevs": 3, 00:26:43.300 "num_base_bdevs_discovered": 3, 00:26:43.300 "num_base_bdevs_operational": 3, 00:26:43.300 "process": { 00:26:43.300 "type": "rebuild", 00:26:43.300 "target": "spare", 00:26:43.300 "progress": { 00:26:43.300 "blocks": 24576, 00:26:43.300 "percent": 18 00:26:43.300 } 00:26:43.300 }, 00:26:43.300 "base_bdevs_list": [ 00:26:43.300 { 00:26:43.300 "name": "spare", 00:26:43.300 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:43.300 "is_configured": true, 00:26:43.300 "data_offset": 0, 00:26:43.300 "data_size": 65536 00:26:43.300 }, 00:26:43.300 { 00:26:43.300 "name": "BaseBdev2", 00:26:43.300 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:43.300 "is_configured": true, 00:26:43.300 "data_offset": 0, 00:26:43.300 "data_size": 65536 00:26:43.300 }, 00:26:43.300 { 00:26:43.300 "name": "BaseBdev3", 00:26:43.300 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:43.300 "is_configured": true, 00:26:43.300 "data_offset": 0, 00:26:43.300 "data_size": 65536 00:26:43.300 } 00:26:43.300 ] 00:26:43.300 }' 00:26:43.300 12:10:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:43.300 12:10:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:43.300 12:10:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:43.558 12:10:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.558 12:10:48 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:43.817 [2024-11-29 12:10:49.091336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:43.817 [2024-11-29 12:10:49.171711] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:43.817 [2024-11-29 12:10:49.172146] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.817 12:10:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.074 12:10:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:44.074 "name": "raid_bdev1", 00:26:44.074 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:44.074 "strip_size_kb": 64, 00:26:44.074 "state": "online", 00:26:44.074 "raid_level": "raid5f", 00:26:44.074 "superblock": false, 00:26:44.074 "num_base_bdevs": 3, 00:26:44.074 "num_base_bdevs_discovered": 2, 00:26:44.074 "num_base_bdevs_operational": 2, 00:26:44.075 "base_bdevs_list": [ 00:26:44.075 { 00:26:44.075 "name": null, 00:26:44.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.075 "is_configured": false, 00:26:44.075 "data_offset": 0, 00:26:44.075 "data_size": 65536 00:26:44.075 }, 00:26:44.075 { 00:26:44.075 "name": "BaseBdev2", 00:26:44.075 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:44.075 "is_configured": true, 00:26:44.075 "data_offset": 0, 00:26:44.075 "data_size": 65536 00:26:44.075 }, 00:26:44.075 { 00:26:44.075 "name": "BaseBdev3", 00:26:44.075 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:44.075 "is_configured": true, 00:26:44.075 "data_offset": 0, 00:26:44.075 "data_size": 65536 00:26:44.075 } 00:26:44.075 ] 00:26:44.075 }' 00:26:44.075 12:10:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:44.075 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:26:44.639 12:10:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:44.639 12:10:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:44.639 12:10:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:44.639 12:10:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:44.639 12:10:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:44.640 12:10:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.640 12:10:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.899 12:10:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:44.899 "name": "raid_bdev1", 00:26:44.899 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:44.899 "strip_size_kb": 64, 00:26:44.899 "state": "online", 00:26:44.899 "raid_level": "raid5f", 00:26:44.899 "superblock": false, 00:26:44.899 "num_base_bdevs": 3, 00:26:44.899 
"num_base_bdevs_discovered": 2, 00:26:44.899 "num_base_bdevs_operational": 2, 00:26:44.899 "base_bdevs_list": [ 00:26:44.899 { 00:26:44.899 "name": null, 00:26:44.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.899 "is_configured": false, 00:26:44.899 "data_offset": 0, 00:26:44.899 "data_size": 65536 00:26:44.899 }, 00:26:44.899 { 00:26:44.899 "name": "BaseBdev2", 00:26:44.899 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:44.899 "is_configured": true, 00:26:44.899 "data_offset": 0, 00:26:44.899 "data_size": 65536 00:26:44.899 }, 00:26:44.899 { 00:26:44.899 "name": "BaseBdev3", 00:26:44.899 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:44.899 "is_configured": true, 00:26:44.899 "data_offset": 0, 00:26:44.899 "data_size": 65536 00:26:44.899 } 00:26:44.899 ] 00:26:44.899 }' 00:26:44.899 12:10:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:44.899 12:10:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:44.899 12:10:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:44.899 12:10:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:44.899 12:10:50 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:45.158 [2024-11-29 12:10:50.651506] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:45.158 [2024-11-29 12:10:50.651857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:45.158 [2024-11-29 12:10:50.656938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027b30 00:26:45.158 [2024-11-29 12:10:50.659733] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:45.417 12:10:50 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.353 12:10:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.611 12:10:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:46.611 "name": "raid_bdev1", 00:26:46.611 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:46.611 "strip_size_kb": 64, 00:26:46.611 "state": "online", 00:26:46.611 "raid_level": "raid5f", 00:26:46.611 "superblock": false, 00:26:46.611 "num_base_bdevs": 3, 00:26:46.611 "num_base_bdevs_discovered": 3, 00:26:46.611 "num_base_bdevs_operational": 3, 00:26:46.611 "process": { 00:26:46.611 "type": "rebuild", 00:26:46.611 "target": "spare", 00:26:46.611 "progress": { 00:26:46.611 "blocks": 24576, 00:26:46.611 "percent": 18 00:26:46.611 } 00:26:46.611 }, 00:26:46.611 "base_bdevs_list": [ 00:26:46.611 { 00:26:46.611 "name": "spare", 00:26:46.611 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:46.611 "is_configured": true, 00:26:46.611 "data_offset": 0, 00:26:46.611 "data_size": 65536 00:26:46.611 }, 00:26:46.611 { 00:26:46.611 "name": "BaseBdev2", 00:26:46.611 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:46.611 "is_configured": true, 
00:26:46.611 "data_offset": 0, 00:26:46.611 "data_size": 65536 00:26:46.611 }, 00:26:46.611 { 00:26:46.611 "name": "BaseBdev3", 00:26:46.611 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:46.611 "is_configured": true, 00:26:46.611 "data_offset": 0, 00:26:46.611 "data_size": 65536 00:26:46.611 } 00:26:46.611 ] 00:26:46.611 }' 00:26:46.611 12:10:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@657 -- # local timeout=626 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.611 12:10:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.898 12:10:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:46.898 "name": "raid_bdev1", 00:26:46.898 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:46.898 "strip_size_kb": 64, 00:26:46.898 "state": "online", 00:26:46.898 "raid_level": "raid5f", 00:26:46.898 "superblock": false, 00:26:46.898 "num_base_bdevs": 3, 00:26:46.899 "num_base_bdevs_discovered": 3, 00:26:46.899 "num_base_bdevs_operational": 3, 00:26:46.899 "process": { 00:26:46.899 "type": "rebuild", 00:26:46.899 "target": "spare", 00:26:46.899 "progress": { 00:26:46.899 "blocks": 32768, 00:26:46.899 "percent": 25 00:26:46.899 } 00:26:46.899 }, 00:26:46.899 "base_bdevs_list": [ 00:26:46.899 { 00:26:46.899 "name": "spare", 00:26:46.899 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:46.899 "is_configured": true, 00:26:46.899 "data_offset": 0, 00:26:46.899 "data_size": 65536 00:26:46.899 }, 00:26:46.899 { 00:26:46.899 "name": "BaseBdev2", 00:26:46.899 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:46.899 "is_configured": true, 00:26:46.899 "data_offset": 0, 00:26:46.899 "data_size": 65536 00:26:46.899 }, 00:26:46.899 { 00:26:46.899 "name": "BaseBdev3", 00:26:46.899 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:46.899 "is_configured": true, 00:26:46.899 "data_offset": 0, 00:26:46.899 "data_size": 65536 00:26:46.899 } 00:26:46.899 ] 00:26:46.899 }' 00:26:46.899 12:10:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:46.899 12:10:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.899 12:10:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:47.157 12:10:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:47.157 12:10:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:48.094 
12:10:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.094 12:10:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.353 12:10:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:48.353 "name": "raid_bdev1", 00:26:48.353 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:48.353 "strip_size_kb": 64, 00:26:48.353 "state": "online", 00:26:48.353 "raid_level": "raid5f", 00:26:48.353 "superblock": false, 00:26:48.353 "num_base_bdevs": 3, 00:26:48.353 "num_base_bdevs_discovered": 3, 00:26:48.353 "num_base_bdevs_operational": 3, 00:26:48.353 "process": { 00:26:48.353 "type": "rebuild", 00:26:48.353 "target": "spare", 00:26:48.353 "progress": { 00:26:48.353 "blocks": 59392, 00:26:48.353 "percent": 45 00:26:48.353 } 00:26:48.353 }, 00:26:48.353 "base_bdevs_list": [ 00:26:48.353 { 00:26:48.353 "name": "spare", 00:26:48.353 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:48.353 "is_configured": true, 00:26:48.353 "data_offset": 0, 00:26:48.353 "data_size": 65536 00:26:48.353 }, 00:26:48.353 { 00:26:48.353 "name": "BaseBdev2", 00:26:48.353 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:48.353 "is_configured": true, 00:26:48.353 "data_offset": 0, 00:26:48.353 "data_size": 65536 00:26:48.353 }, 00:26:48.353 { 00:26:48.353 "name": "BaseBdev3", 00:26:48.353 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:48.353 "is_configured": true, 00:26:48.353 "data_offset": 0, 00:26:48.353 "data_size": 65536 00:26:48.353 } 00:26:48.353 ] 00:26:48.353 }' 00:26:48.353 12:10:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:48.353 12:10:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.353 12:10:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:48.353 12:10:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.353 12:10:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.289 12:10:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.854 12:10:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:49.854 "name": "raid_bdev1", 00:26:49.854 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:49.854 "strip_size_kb": 64, 00:26:49.854 "state": "online", 00:26:49.854 "raid_level": "raid5f", 00:26:49.854 "superblock": false, 00:26:49.854 "num_base_bdevs": 3, 00:26:49.854 "num_base_bdevs_discovered": 3, 00:26:49.854 "num_base_bdevs_operational": 3, 
00:26:49.854 "process": { 00:26:49.854 "type": "rebuild", 00:26:49.854 "target": "spare", 00:26:49.854 "progress": { 00:26:49.854 "blocks": 88064, 00:26:49.854 "percent": 67 00:26:49.854 } 00:26:49.854 }, 00:26:49.854 "base_bdevs_list": [ 00:26:49.854 { 00:26:49.854 "name": "spare", 00:26:49.854 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:49.854 "is_configured": true, 00:26:49.854 "data_offset": 0, 00:26:49.854 "data_size": 65536 00:26:49.854 }, 00:26:49.854 { 00:26:49.854 "name": "BaseBdev2", 00:26:49.854 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:49.854 "is_configured": true, 00:26:49.854 "data_offset": 0, 00:26:49.854 "data_size": 65536 00:26:49.854 }, 00:26:49.854 { 00:26:49.854 "name": "BaseBdev3", 00:26:49.854 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:49.854 "is_configured": true, 00:26:49.854 "data_offset": 0, 00:26:49.854 "data_size": 65536 00:26:49.854 } 00:26:49.854 ] 00:26:49.854 }' 00:26:49.854 12:10:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:49.854 12:10:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.854 12:10:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:49.855 12:10:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.855 12:10:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.789 12:10:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.047 12:10:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:51.047 "name": "raid_bdev1", 00:26:51.047 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:51.047 "strip_size_kb": 64, 00:26:51.047 "state": "online", 00:26:51.047 "raid_level": "raid5f", 00:26:51.047 "superblock": false, 00:26:51.047 "num_base_bdevs": 3, 00:26:51.047 "num_base_bdevs_discovered": 3, 00:26:51.047 "num_base_bdevs_operational": 3, 00:26:51.047 "process": { 00:26:51.047 "type": "rebuild", 00:26:51.047 "target": "spare", 00:26:51.047 "progress": { 00:26:51.047 "blocks": 114688, 00:26:51.047 "percent": 87 00:26:51.047 } 00:26:51.047 }, 00:26:51.047 "base_bdevs_list": [ 00:26:51.047 { 00:26:51.047 "name": "spare", 00:26:51.047 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:51.047 "is_configured": true, 00:26:51.047 "data_offset": 0, 00:26:51.047 "data_size": 65536 00:26:51.047 }, 00:26:51.047 { 00:26:51.047 "name": "BaseBdev2", 00:26:51.047 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:51.047 "is_configured": true, 00:26:51.047 "data_offset": 0, 00:26:51.047 "data_size": 65536 00:26:51.047 }, 00:26:51.047 { 00:26:51.047 "name": "BaseBdev3", 00:26:51.047 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:51.047 "is_configured": true, 00:26:51.047 "data_offset": 0, 00:26:51.047 "data_size": 65536 00:26:51.047 } 00:26:51.047 ] 00:26:51.047 }' 00:26:51.047 12:10:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:51.047 12:10:56 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:51.047 12:10:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:51.047 12:10:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.047 12:10:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:51.613 [2024-11-29 12:10:57.126935] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:51.613 [2024-11-29 12:10:57.127032] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:51.872 [2024-11-29 12:10:57.127137] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.130 12:10:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:52.404 "name": "raid_bdev1", 00:26:52.404 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:52.404 "strip_size_kb": 64, 00:26:52.404 "state": "online", 00:26:52.404 "raid_level": "raid5f", 00:26:52.404 "superblock": false, 00:26:52.404 "num_base_bdevs": 3, 00:26:52.404 "num_base_bdevs_discovered": 3, 00:26:52.404 "num_base_bdevs_operational": 3, 00:26:52.404 "base_bdevs_list": [ 00:26:52.404 { 00:26:52.404 "name": "spare", 00:26:52.404 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:52.404 "is_configured": true, 00:26:52.404 "data_offset": 0, 00:26:52.404 "data_size": 65536 00:26:52.404 }, 00:26:52.404 { 00:26:52.404 "name": "BaseBdev2", 00:26:52.404 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:52.404 "is_configured": true, 00:26:52.404 "data_offset": 0, 00:26:52.404 "data_size": 65536 00:26:52.404 }, 00:26:52.404 { 00:26:52.404 "name": "BaseBdev3", 00:26:52.404 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:52.404 "is_configured": true, 00:26:52.404 "data_offset": 0, 00:26:52.404 "data_size": 65536 00:26:52.404 } 00:26:52.404 ] 00:26:52.404 }' 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@660 -- # break 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.404 12:10:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:26:52.661 12:10:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:52.661 "name": "raid_bdev1", 00:26:52.661 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:52.661 "strip_size_kb": 64, 00:26:52.661 "state": "online", 00:26:52.661 "raid_level": "raid5f", 00:26:52.661 "superblock": false, 00:26:52.661 "num_base_bdevs": 3, 00:26:52.661 "num_base_bdevs_discovered": 3, 00:26:52.661 "num_base_bdevs_operational": 3, 00:26:52.661 "base_bdevs_list": [ 00:26:52.661 { 00:26:52.661 "name": "spare", 00:26:52.661 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:52.661 "is_configured": true, 00:26:52.661 "data_offset": 0, 00:26:52.661 "data_size": 65536 00:26:52.661 }, 00:26:52.661 { 00:26:52.661 "name": "BaseBdev2", 00:26:52.661 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:52.661 "is_configured": true, 00:26:52.661 "data_offset": 0, 00:26:52.661 "data_size": 65536 00:26:52.661 }, 00:26:52.661 { 00:26:52.661 "name": "BaseBdev3", 00:26:52.661 "uuid": "ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:52.661 "is_configured": true, 00:26:52.661 "data_offset": 0, 00:26:52.661 "data_size": 65536 00:26:52.661 } 00:26:52.661 ] 00:26:52.661 }' 00:26:52.661 12:10:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:52.661 12:10:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.661 12:10:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.918 12:10:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.176 12:10:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:53.176 "name": "raid_bdev1", 00:26:53.176 "uuid": "940c300a-fa3c-44ad-b059-d04ed078e906", 00:26:53.176 "strip_size_kb": 64, 00:26:53.176 "state": "online", 00:26:53.176 "raid_level": "raid5f", 00:26:53.176 "superblock": false, 00:26:53.176 "num_base_bdevs": 3, 00:26:53.176 "num_base_bdevs_discovered": 3, 00:26:53.176 "num_base_bdevs_operational": 3, 00:26:53.176 "base_bdevs_list": [ 00:26:53.176 { 00:26:53.176 "name": "spare", 00:26:53.176 "uuid": "1d4381a8-5704-5c36-8f51-0fbb2bf20380", 00:26:53.176 "is_configured": true, 00:26:53.176 "data_offset": 0, 00:26:53.176 "data_size": 65536 00:26:53.176 }, 00:26:53.176 { 00:26:53.176 "name": "BaseBdev2", 00:26:53.176 "uuid": "7c1d9534-76ee-471b-bf61-fe0a1257d5ee", 00:26:53.176 "is_configured": true, 00:26:53.176 "data_offset": 0, 00:26:53.176 "data_size": 65536 00:26:53.176 }, 00:26:53.176 { 00:26:53.176 "name": "BaseBdev3", 00:26:53.176 "uuid": 
"ca2c4332-561d-4d7c-be76-fa093703758f", 00:26:53.176 "is_configured": true, 00:26:53.176 "data_offset": 0, 00:26:53.176 "data_size": 65536 00:26:53.176 } 00:26:53.176 ] 00:26:53.176 }' 00:26:53.176 12:10:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:53.176 12:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.743 12:10:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:54.002 [2024-11-29 12:10:59.326052] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:54.002 [2024-11-29 12:10:59.326101] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:54.002 [2024-11-29 12:10:59.326243] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.002 [2024-11-29 12:10:59.326338] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.002 [2024-11-29 12:10:59.326371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:26:54.002 12:10:59 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:54.002 12:10:59 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.260 12:10:59 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:54.260 12:10:59 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:54.260 12:10:59 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@12 -- # local i 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.260 12:10:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:54.519 /dev/nbd0 00:26:54.519 12:10:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:54.519 12:10:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:54.519 12:10:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:54.519 12:10:59 -- common/autotest_common.sh@867 -- # local i 00:26:54.519 12:10:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:54.519 12:10:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:54.519 12:10:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:54.519 12:10:59 -- common/autotest_common.sh@871 -- # break 00:26:54.519 12:10:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:54.519 12:10:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:54.519 12:10:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:54.519 1+0 records in 00:26:54.519 1+0 records out 00:26:54.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313787 s, 13.1 MB/s 00:26:54.519 12:10:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.519 12:10:59 
-- common/autotest_common.sh@884 -- # size=4096 00:26:54.519 12:10:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.519 12:10:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:54.519 12:10:59 -- common/autotest_common.sh@887 -- # return 0 00:26:54.519 12:10:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.519 12:10:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.519 12:10:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:54.779 /dev/nbd1 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:54.779 12:11:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:54.779 12:11:00 -- common/autotest_common.sh@867 -- # local i 00:26:54.779 12:11:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:54.779 12:11:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:54.779 12:11:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:54.779 12:11:00 -- common/autotest_common.sh@871 -- # break 00:26:54.779 12:11:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:54.779 12:11:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:54.779 12:11:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:54.779 1+0 records in 00:26:54.779 1+0 records out 00:26:54.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379466 s, 10.8 MB/s 00:26:54.779 12:11:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.779 12:11:00 -- common/autotest_common.sh@884 -- # size=4096 00:26:54.779 12:11:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.779 12:11:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:54.779 12:11:00 -- common/autotest_common.sh@887 -- # return 0 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.779 12:11:00 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:54.779 12:11:00 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@51 -- # local i 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:54.779 12:11:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@41 -- # break 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@45 -- # return 0 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:26:55.038 12:11:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@41 -- # break 00:26:55.296 12:11:00 -- bdev/nbd_common.sh@45 -- # return 0 00:26:55.296 12:11:00 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:26:55.296 12:11:00 -- bdev/bdev_raid.sh@709 -- # killprocess 139747 00:26:55.296 12:11:00 -- common/autotest_common.sh@936 -- # '[' -z 139747 ']' 00:26:55.296 12:11:00 -- common/autotest_common.sh@940 -- # kill -0 139747 00:26:55.296 12:11:00 -- common/autotest_common.sh@941 -- # uname 00:26:55.296 12:11:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:55.296 12:11:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139747 00:26:55.296 12:11:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:55.296 12:11:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:55.296 killing process with pid 139747 00:26:55.296 12:11:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139747' 00:26:55.296 12:11:00 -- common/autotest_common.sh@955 -- # kill 139747 00:26:55.296 Received shutdown signal, test time was about 60.000000 seconds 00:26:55.296 00:26:55.296 Latency(us) 00:26:55.296 [2024-11-29T12:11:00.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.296 [2024-11-29T12:11:00.807Z] =================================================================================================================== 00:26:55.296 [2024-11-29T12:11:00.808Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:55.297 [2024-11-29 12:11:00.747227] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:55.297 12:11:00 -- common/autotest_common.sh@960 -- # wait 139747 00:26:55.297 [2024-11-29 12:11:00.795930] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:55.863 00:26:55.863 real 0m20.240s 00:26:55.863 user 0m31.365s 00:26:55.863 sys 0m2.403s 00:26:55.863 12:11:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:55.863 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:26:55.863 ************************************ 00:26:55.863 END TEST raid5f_rebuild_test 00:26:55.863 ************************************ 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:26:55.863 12:11:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:55.863 12:11:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:55.863 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:26:55.863 ************************************ 00:26:55.863 START TEST raid5f_rebuild_test_sb 00:26:55.863 ************************************ 00:26:55.863 12:11:01 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@518 -- 
# local num_base_bdevs=3 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:55.863 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=140287 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140287 /var/tmp/spdk-raid.sock 00:26:55.864 12:11:01 -- common/autotest_common.sh@829 -- # '[' -z 140287 ']' 00:26:55.864 12:11:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:55.864 12:11:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:55.864 12:11:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.864 12:11:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:55.864 12:11:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.864 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:26:55.864 [2024-11-29 12:11:01.187186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:55.864 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:55.864 Zero copy mechanism will not be used. 
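The second half of the test repeats the same rebuild flow with superblock=true: create_arg picks up ' -s', the base bdevs are built as malloc plus passthru pairs (BaseBdevN_malloc wrapped as BaseBdevN), and the array is created with an on-disk superblock. Because the superblock reserves the head of each base bdev, the later bdev_raid_get_bdevs output reports data_offset 2048 and data_size 63488 instead of the 0/65536 seen in the non-superblock run. The creation call, copied from the trace that follows:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1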
00:26:55.864 [2024-11-29 12:11:01.187448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140287 ] 00:26:55.864 [2024-11-29 12:11:01.337083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.122 [2024-11-29 12:11:01.436997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.122 [2024-11-29 12:11:01.494220] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.056 12:11:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.056 12:11:02 -- common/autotest_common.sh@862 -- # return 0 00:26:57.056 12:11:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:57.056 12:11:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:57.056 12:11:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:57.056 BaseBdev1_malloc 00:26:57.056 12:11:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:57.315 [2024-11-29 12:11:02.800708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:57.315 [2024-11-29 12:11:02.800845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.315 [2024-11-29 12:11:02.800900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:26:57.315 [2024-11-29 12:11:02.800954] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.315 [2024-11-29 12:11:02.803718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.315 [2024-11-29 12:11:02.803786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:57.315 BaseBdev1 00:26:57.315 12:11:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:57.315 12:11:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:57.315 12:11:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:57.574 BaseBdev2_malloc 00:26:57.574 12:11:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:57.833 [2024-11-29 12:11:03.275863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:57.833 [2024-11-29 12:11:03.275981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.833 [2024-11-29 12:11:03.276032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:57.833 [2024-11-29 12:11:03.276081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.833 [2024-11-29 12:11:03.278666] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.833 [2024-11-29 12:11:03.278724] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:57.833 BaseBdev2 00:26:57.833 12:11:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:57.833 12:11:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:57.833 12:11:03 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:58.090 BaseBdev3_malloc 00:26:58.090 12:11:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:58.353 [2024-11-29 12:11:03.775359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:58.353 [2024-11-29 12:11:03.775475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.353 [2024-11-29 12:11:03.775530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:58.353 [2024-11-29 12:11:03.775591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.353 [2024-11-29 12:11:03.778185] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.353 [2024-11-29 12:11:03.778247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:58.353 BaseBdev3 00:26:58.353 12:11:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:58.611 spare_malloc 00:26:58.611 12:11:04 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:58.870 spare_delay 00:26:58.870 12:11:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:59.128 [2024-11-29 12:11:04.526701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:59.128 [2024-11-29 12:11:04.526825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.128 [2024-11-29 12:11:04.526874] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:59.128 [2024-11-29 12:11:04.526935] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.128 [2024-11-29 12:11:04.529609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.128 [2024-11-29 12:11:04.529671] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:59.128 spare 00:26:59.128 12:11:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:59.387 [2024-11-29 12:11:04.778873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:59.387 [2024-11-29 12:11:04.781172] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:59.387 [2024-11-29 12:11:04.781263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:59.387 [2024-11-29 12:11:04.781508] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:26:59.387 [2024-11-29 12:11:04.781525] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:59.387 [2024-11-29 12:11:04.781709] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:59.387 [2024-11-29 12:11:04.782586] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:26:59.387 [2024-11-29 12:11:04.782615] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:26:59.387 [2024-11-29 12:11:04.782831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.387 12:11:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.645 12:11:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:59.645 "name": "raid_bdev1", 00:26:59.645 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:26:59.645 "strip_size_kb": 64, 00:26:59.645 "state": "online", 00:26:59.645 "raid_level": "raid5f", 00:26:59.645 "superblock": true, 00:26:59.645 "num_base_bdevs": 3, 00:26:59.645 "num_base_bdevs_discovered": 3, 00:26:59.645 "num_base_bdevs_operational": 3, 00:26:59.645 "base_bdevs_list": [ 00:26:59.645 { 00:26:59.645 "name": "BaseBdev1", 00:26:59.645 "uuid": "e69af1f4-8288-5f75-9e26-b7a87734a2ef", 00:26:59.645 "is_configured": true, 00:26:59.645 "data_offset": 2048, 00:26:59.645 "data_size": 63488 00:26:59.645 }, 00:26:59.645 { 00:26:59.645 "name": "BaseBdev2", 00:26:59.646 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:26:59.646 "is_configured": true, 00:26:59.646 "data_offset": 2048, 00:26:59.646 "data_size": 63488 00:26:59.646 }, 00:26:59.646 { 00:26:59.646 "name": "BaseBdev3", 00:26:59.646 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:26:59.646 "is_configured": true, 00:26:59.646 "data_offset": 2048, 00:26:59.646 "data_size": 63488 00:26:59.646 } 00:26:59.646 ] 00:26:59.646 }' 00:26:59.646 12:11:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:59.646 12:11:05 -- common/autotest_common.sh@10 -- # set +x 00:27:00.211 12:11:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:27:00.211 12:11:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:00.469 [2024-11-29 12:11:05.883222] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:00.469 12:11:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:27:00.469 12:11:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:00.469 12:11:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.727 12:11:06 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:27:00.727 12:11:06 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:27:00.727 12:11:06 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:27:00.727 12:11:06 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@12 -- # local i 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:00.727 12:11:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:00.984 [2024-11-29 12:11:06.387251] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:00.984 /dev/nbd0 00:27:00.984 12:11:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:00.984 12:11:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:00.984 12:11:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:00.984 12:11:06 -- common/autotest_common.sh@867 -- # local i 00:27:00.984 12:11:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:00.984 12:11:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:00.984 12:11:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:00.984 12:11:06 -- common/autotest_common.sh@871 -- # break 00:27:00.984 12:11:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:00.984 12:11:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:00.984 12:11:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:00.984 1+0 records in 00:27:00.984 1+0 records out 00:27:00.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380838 s, 10.8 MB/s 00:27:00.984 12:11:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:00.984 12:11:06 -- common/autotest_common.sh@884 -- # size=4096 00:27:00.984 12:11:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:00.984 12:11:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:00.984 12:11:06 -- common/autotest_common.sh@887 -- # return 0 00:27:00.984 12:11:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:00.984 12:11:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:00.984 12:11:06 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:27:00.984 12:11:06 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:27:00.984 12:11:06 -- bdev/bdev_raid.sh@582 -- # echo 128 00:27:00.984 12:11:06 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:27:01.549 496+0 records in 00:27:01.549 496+0 records out 00:27:01.549 65011712 bytes (65 MB, 62 MiB) copied, 0.367854 s, 177 MB/s 00:27:01.549 12:11:06 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:01.549 12:11:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:01.549 12:11:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:01.549 12:11:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:01.549 12:11:06 -- bdev/nbd_common.sh@51 -- # local i 00:27:01.549 12:11:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.549 12:11:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:01.806 [2024-11-29 12:11:07.083941] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@41 -- # break 00:27:01.806 12:11:07 -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:01.806 [2024-11-29 12:11:07.299648] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:01.806 12:11:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.064 12:11:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:02.064 "name": "raid_bdev1", 00:27:02.064 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:02.064 "strip_size_kb": 64, 00:27:02.064 "state": "online", 00:27:02.064 "raid_level": "raid5f", 00:27:02.064 "superblock": true, 00:27:02.064 "num_base_bdevs": 3, 00:27:02.064 "num_base_bdevs_discovered": 2, 00:27:02.064 "num_base_bdevs_operational": 2, 00:27:02.064 "base_bdevs_list": [ 00:27:02.064 { 00:27:02.064 "name": null, 00:27:02.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.064 "is_configured": false, 00:27:02.064 "data_offset": 2048, 00:27:02.064 "data_size": 63488 00:27:02.064 }, 00:27:02.064 { 00:27:02.064 "name": "BaseBdev2", 00:27:02.064 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:02.064 "is_configured": true, 00:27:02.064 "data_offset": 2048, 00:27:02.064 "data_size": 63488 00:27:02.064 }, 00:27:02.064 { 00:27:02.064 "name": "BaseBdev3", 00:27:02.064 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:02.064 "is_configured": true, 00:27:02.065 "data_offset": 2048, 00:27:02.065 "data_size": 63488 00:27:02.065 } 00:27:02.065 ] 00:27:02.065 }' 00:27:02.065 12:11:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:02.065 12:11:07 -- common/autotest_common.sh@10 -- # set +x 00:27:02.997 12:11:08 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:02.998 [2024-11-29 12:11:08.387895] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:27:02.998 [2024-11-29 12:11:08.387964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:02.998 [2024-11-29 12:11:08.392914] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 00:27:02.998 [2024-11-29 12:11:08.395662] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:02.998 12:11:08 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.931 12:11:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.190 12:11:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:04.190 "name": "raid_bdev1", 00:27:04.190 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:04.190 "strip_size_kb": 64, 00:27:04.190 "state": "online", 00:27:04.190 "raid_level": "raid5f", 00:27:04.190 "superblock": true, 00:27:04.190 "num_base_bdevs": 3, 00:27:04.190 "num_base_bdevs_discovered": 3, 00:27:04.190 "num_base_bdevs_operational": 3, 00:27:04.190 "process": { 00:27:04.190 "type": "rebuild", 00:27:04.190 "target": "spare", 00:27:04.190 "progress": { 00:27:04.190 "blocks": 24576, 00:27:04.190 "percent": 19 00:27:04.190 } 00:27:04.190 }, 00:27:04.190 "base_bdevs_list": [ 00:27:04.190 { 00:27:04.190 "name": "spare", 00:27:04.190 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:04.190 "is_configured": true, 00:27:04.190 "data_offset": 2048, 00:27:04.190 "data_size": 63488 00:27:04.190 }, 00:27:04.190 { 00:27:04.190 "name": "BaseBdev2", 00:27:04.190 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:04.190 "is_configured": true, 00:27:04.190 "data_offset": 2048, 00:27:04.190 "data_size": 63488 00:27:04.190 }, 00:27:04.190 { 00:27:04.190 "name": "BaseBdev3", 00:27:04.190 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:04.190 "is_configured": true, 00:27:04.190 "data_offset": 2048, 00:27:04.190 "data_size": 63488 00:27:04.190 } 00:27:04.190 ] 00:27:04.190 }' 00:27:04.190 12:11:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:04.450 12:11:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:04.450 12:11:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:04.450 12:11:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:04.450 12:11:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:04.713 [2024-11-29 12:11:09.966084] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:04.713 [2024-11-29 12:11:10.013658] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:04.713 [2024-11-29 12:11:10.013794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.713 12:11:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.972 12:11:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:04.972 "name": "raid_bdev1", 00:27:04.972 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:04.972 "strip_size_kb": 64, 00:27:04.972 "state": "online", 00:27:04.972 "raid_level": "raid5f", 00:27:04.972 "superblock": true, 00:27:04.972 "num_base_bdevs": 3, 00:27:04.972 "num_base_bdevs_discovered": 2, 00:27:04.972 "num_base_bdevs_operational": 2, 00:27:04.972 "base_bdevs_list": [ 00:27:04.972 { 00:27:04.972 "name": null, 00:27:04.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.972 "is_configured": false, 00:27:04.972 "data_offset": 2048, 00:27:04.972 "data_size": 63488 00:27:04.972 }, 00:27:04.972 { 00:27:04.972 "name": "BaseBdev2", 00:27:04.972 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:04.972 "is_configured": true, 00:27:04.972 "data_offset": 2048, 00:27:04.972 "data_size": 63488 00:27:04.972 }, 00:27:04.972 { 00:27:04.972 "name": "BaseBdev3", 00:27:04.972 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:04.972 "is_configured": true, 00:27:04.972 "data_offset": 2048, 00:27:04.972 "data_size": 63488 00:27:04.972 } 00:27:04.972 ] 00:27:04.972 }' 00:27:04.972 12:11:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:04.972 12:11:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.540 12:11:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.799 12:11:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:05.799 "name": "raid_bdev1", 00:27:05.799 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:05.799 "strip_size_kb": 64, 00:27:05.799 "state": "online", 00:27:05.799 "raid_level": "raid5f", 00:27:05.799 "superblock": true, 00:27:05.799 "num_base_bdevs": 3, 00:27:05.799 "num_base_bdevs_discovered": 2, 00:27:05.799 "num_base_bdevs_operational": 2, 00:27:05.799 "base_bdevs_list": [ 00:27:05.799 { 00:27:05.799 "name": null, 00:27:05.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.799 "is_configured": false, 00:27:05.799 "data_offset": 2048, 00:27:05.799 "data_size": 63488 00:27:05.799 }, 00:27:05.799 { 00:27:05.799 "name": "BaseBdev2", 00:27:05.799 "uuid": 
"0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:05.799 "is_configured": true, 00:27:05.799 "data_offset": 2048, 00:27:05.799 "data_size": 63488 00:27:05.799 }, 00:27:05.799 { 00:27:05.799 "name": "BaseBdev3", 00:27:05.799 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:05.799 "is_configured": true, 00:27:05.799 "data_offset": 2048, 00:27:05.799 "data_size": 63488 00:27:05.799 } 00:27:05.799 ] 00:27:05.799 }' 00:27:05.799 12:11:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:05.799 12:11:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:05.799 12:11:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:05.799 12:11:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:05.799 12:11:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:06.059 [2024-11-29 12:11:11.517257] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:06.059 [2024-11-29 12:11:11.517321] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:06.059 [2024-11-29 12:11:11.522288] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:27:06.059 [2024-11-29 12:11:11.524869] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:06.059 12:11:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:07.438 "name": "raid_bdev1", 00:27:07.438 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:07.438 "strip_size_kb": 64, 00:27:07.438 "state": "online", 00:27:07.438 "raid_level": "raid5f", 00:27:07.438 "superblock": true, 00:27:07.438 "num_base_bdevs": 3, 00:27:07.438 "num_base_bdevs_discovered": 3, 00:27:07.438 "num_base_bdevs_operational": 3, 00:27:07.438 "process": { 00:27:07.438 "type": "rebuild", 00:27:07.438 "target": "spare", 00:27:07.438 "progress": { 00:27:07.438 "blocks": 24576, 00:27:07.438 "percent": 19 00:27:07.438 } 00:27:07.438 }, 00:27:07.438 "base_bdevs_list": [ 00:27:07.438 { 00:27:07.438 "name": "spare", 00:27:07.438 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:07.438 "is_configured": true, 00:27:07.438 "data_offset": 2048, 00:27:07.438 "data_size": 63488 00:27:07.438 }, 00:27:07.438 { 00:27:07.438 "name": "BaseBdev2", 00:27:07.438 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:07.438 "is_configured": true, 00:27:07.438 "data_offset": 2048, 00:27:07.438 "data_size": 63488 00:27:07.438 }, 00:27:07.438 { 00:27:07.438 "name": "BaseBdev3", 00:27:07.438 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:07.438 "is_configured": true, 00:27:07.438 "data_offset": 2048, 00:27:07.438 "data_size": 63488 00:27:07.438 } 00:27:07.438 ] 00:27:07.438 }' 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:27:07.438 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@657 -- # local timeout=646 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.438 12:11:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.696 12:11:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:07.696 "name": "raid_bdev1", 00:27:07.696 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:07.696 "strip_size_kb": 64, 00:27:07.696 "state": "online", 00:27:07.696 "raid_level": "raid5f", 00:27:07.696 "superblock": true, 00:27:07.696 "num_base_bdevs": 3, 00:27:07.696 "num_base_bdevs_discovered": 3, 00:27:07.696 "num_base_bdevs_operational": 3, 00:27:07.696 "process": { 00:27:07.696 "type": "rebuild", 00:27:07.696 "target": "spare", 00:27:07.696 "progress": { 00:27:07.696 "blocks": 32768, 00:27:07.696 "percent": 25 00:27:07.696 } 00:27:07.696 }, 00:27:07.696 "base_bdevs_list": [ 00:27:07.696 { 00:27:07.696 "name": "spare", 00:27:07.696 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:07.696 "is_configured": true, 00:27:07.696 "data_offset": 2048, 00:27:07.696 "data_size": 63488 00:27:07.696 }, 00:27:07.696 { 00:27:07.696 "name": "BaseBdev2", 00:27:07.696 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:07.696 "is_configured": true, 00:27:07.696 "data_offset": 2048, 00:27:07.696 "data_size": 63488 00:27:07.696 }, 00:27:07.696 { 00:27:07.696 "name": "BaseBdev3", 00:27:07.696 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:07.696 "is_configured": true, 00:27:07.696 "data_offset": 2048, 00:27:07.696 "data_size": 63488 00:27:07.696 } 00:27:07.696 ] 00:27:07.696 }' 00:27:07.696 12:11:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:07.953 12:11:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.953 12:11:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:07.953 12:11:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.954 12:11:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:08.887 12:11:14 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.887 12:11:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.144 12:11:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:09.144 "name": "raid_bdev1", 00:27:09.144 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:09.144 "strip_size_kb": 64, 00:27:09.144 "state": "online", 00:27:09.144 "raid_level": "raid5f", 00:27:09.144 "superblock": true, 00:27:09.144 "num_base_bdevs": 3, 00:27:09.144 "num_base_bdevs_discovered": 3, 00:27:09.144 "num_base_bdevs_operational": 3, 00:27:09.144 "process": { 00:27:09.144 "type": "rebuild", 00:27:09.144 "target": "spare", 00:27:09.144 "progress": { 00:27:09.144 "blocks": 59392, 00:27:09.144 "percent": 46 00:27:09.144 } 00:27:09.144 }, 00:27:09.144 "base_bdevs_list": [ 00:27:09.144 { 00:27:09.144 "name": "spare", 00:27:09.144 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:09.145 "is_configured": true, 00:27:09.145 "data_offset": 2048, 00:27:09.145 "data_size": 63488 00:27:09.145 }, 00:27:09.145 { 00:27:09.145 "name": "BaseBdev2", 00:27:09.145 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:09.145 "is_configured": true, 00:27:09.145 "data_offset": 2048, 00:27:09.145 "data_size": 63488 00:27:09.145 }, 00:27:09.145 { 00:27:09.145 "name": "BaseBdev3", 00:27:09.145 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:09.145 "is_configured": true, 00:27:09.145 "data_offset": 2048, 00:27:09.145 "data_size": 63488 00:27:09.145 } 00:27:09.145 ] 00:27:09.145 }' 00:27:09.145 12:11:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:09.145 12:11:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.145 12:11:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:09.403 12:11:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.403 12:11:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.337 12:11:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.597 12:11:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:10.597 "name": "raid_bdev1", 00:27:10.597 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:10.597 "strip_size_kb": 64, 00:27:10.597 "state": "online", 00:27:10.597 "raid_level": "raid5f", 00:27:10.597 "superblock": true, 00:27:10.597 "num_base_bdevs": 3, 00:27:10.597 "num_base_bdevs_discovered": 3, 00:27:10.597 "num_base_bdevs_operational": 3, 00:27:10.597 "process": { 00:27:10.597 "type": "rebuild", 00:27:10.597 "target": "spare", 00:27:10.597 "progress": { 00:27:10.597 "blocks": 88064, 00:27:10.597 "percent": 69 00:27:10.597 } 
00:27:10.597 }, 00:27:10.597 "base_bdevs_list": [ 00:27:10.597 { 00:27:10.597 "name": "spare", 00:27:10.597 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:10.597 "is_configured": true, 00:27:10.597 "data_offset": 2048, 00:27:10.597 "data_size": 63488 00:27:10.597 }, 00:27:10.597 { 00:27:10.597 "name": "BaseBdev2", 00:27:10.597 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:10.597 "is_configured": true, 00:27:10.597 "data_offset": 2048, 00:27:10.597 "data_size": 63488 00:27:10.597 }, 00:27:10.597 { 00:27:10.597 "name": "BaseBdev3", 00:27:10.597 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:10.597 "is_configured": true, 00:27:10.597 "data_offset": 2048, 00:27:10.597 "data_size": 63488 00:27:10.597 } 00:27:10.597 ] 00:27:10.597 }' 00:27:10.597 12:11:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:10.597 12:11:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:10.597 12:11:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:10.597 12:11:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:10.597 12:11:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.541 12:11:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.799 12:11:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:11.799 "name": "raid_bdev1", 00:27:11.799 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:11.799 "strip_size_kb": 64, 00:27:11.799 "state": "online", 00:27:11.799 "raid_level": "raid5f", 00:27:11.799 "superblock": true, 00:27:11.799 "num_base_bdevs": 3, 00:27:11.799 "num_base_bdevs_discovered": 3, 00:27:11.799 "num_base_bdevs_operational": 3, 00:27:11.799 "process": { 00:27:11.799 "type": "rebuild", 00:27:11.799 "target": "spare", 00:27:11.799 "progress": { 00:27:11.799 "blocks": 114688, 00:27:11.799 "percent": 90 00:27:11.799 } 00:27:11.799 }, 00:27:11.799 "base_bdevs_list": [ 00:27:11.799 { 00:27:11.799 "name": "spare", 00:27:11.799 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:11.799 "is_configured": true, 00:27:11.799 "data_offset": 2048, 00:27:11.799 "data_size": 63488 00:27:11.799 }, 00:27:11.799 { 00:27:11.799 "name": "BaseBdev2", 00:27:11.799 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:11.799 "is_configured": true, 00:27:11.799 "data_offset": 2048, 00:27:11.799 "data_size": 63488 00:27:11.799 }, 00:27:11.799 { 00:27:11.799 "name": "BaseBdev3", 00:27:11.799 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:11.799 "is_configured": true, 00:27:11.799 "data_offset": 2048, 00:27:11.799 "data_size": 63488 00:27:11.799 } 00:27:11.799 ] 00:27:11.799 }' 00:27:11.799 12:11:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:12.058 12:11:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:12.058 12:11:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:12.058 12:11:17 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:12.058 12:11:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:12.316 [2024-11-29 12:11:17.793335] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:12.316 [2024-11-29 12:11:17.793452] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:12.316 [2024-11-29 12:11:17.793660] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.880 12:11:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.137 12:11:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:13.137 "name": "raid_bdev1", 00:27:13.137 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:13.137 "strip_size_kb": 64, 00:27:13.137 "state": "online", 00:27:13.137 "raid_level": "raid5f", 00:27:13.137 "superblock": true, 00:27:13.137 "num_base_bdevs": 3, 00:27:13.137 "num_base_bdevs_discovered": 3, 00:27:13.137 "num_base_bdevs_operational": 3, 00:27:13.137 "base_bdevs_list": [ 00:27:13.137 { 00:27:13.137 "name": "spare", 00:27:13.137 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:13.137 "is_configured": true, 00:27:13.137 "data_offset": 2048, 00:27:13.137 "data_size": 63488 00:27:13.137 }, 00:27:13.137 { 00:27:13.137 "name": "BaseBdev2", 00:27:13.137 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:13.137 "is_configured": true, 00:27:13.137 "data_offset": 2048, 00:27:13.137 "data_size": 63488 00:27:13.137 }, 00:27:13.137 { 00:27:13.137 "name": "BaseBdev3", 00:27:13.137 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:13.137 "is_configured": true, 00:27:13.137 "data_offset": 2048, 00:27:13.137 "data_size": 63488 00:27:13.137 } 00:27:13.137 ] 00:27:13.137 }' 00:27:13.137 12:11:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@660 -- # break 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.394 12:11:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.650 12:11:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:13.650 "name": "raid_bdev1", 00:27:13.650 "uuid": 
"662c3795-9ba1-492a-a815-730ce72ca872", 00:27:13.650 "strip_size_kb": 64, 00:27:13.650 "state": "online", 00:27:13.650 "raid_level": "raid5f", 00:27:13.650 "superblock": true, 00:27:13.650 "num_base_bdevs": 3, 00:27:13.650 "num_base_bdevs_discovered": 3, 00:27:13.650 "num_base_bdevs_operational": 3, 00:27:13.650 "base_bdevs_list": [ 00:27:13.650 { 00:27:13.650 "name": "spare", 00:27:13.650 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:13.650 "is_configured": true, 00:27:13.650 "data_offset": 2048, 00:27:13.650 "data_size": 63488 00:27:13.650 }, 00:27:13.650 { 00:27:13.650 "name": "BaseBdev2", 00:27:13.650 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:13.650 "is_configured": true, 00:27:13.650 "data_offset": 2048, 00:27:13.650 "data_size": 63488 00:27:13.650 }, 00:27:13.650 { 00:27:13.650 "name": "BaseBdev3", 00:27:13.650 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:13.650 "is_configured": true, 00:27:13.650 "data_offset": 2048, 00:27:13.650 "data_size": 63488 00:27:13.650 } 00:27:13.650 ] 00:27:13.650 }' 00:27:13.650 12:11:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:13.650 12:11:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:13.650 12:11:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:13.650 12:11:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:13.650 12:11:19 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:13.650 12:11:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:13.650 12:11:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.651 12:11:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.908 12:11:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:13.908 "name": "raid_bdev1", 00:27:13.908 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:13.908 "strip_size_kb": 64, 00:27:13.908 "state": "online", 00:27:13.908 "raid_level": "raid5f", 00:27:13.908 "superblock": true, 00:27:13.908 "num_base_bdevs": 3, 00:27:13.908 "num_base_bdevs_discovered": 3, 00:27:13.908 "num_base_bdevs_operational": 3, 00:27:13.908 "base_bdevs_list": [ 00:27:13.908 { 00:27:13.908 "name": "spare", 00:27:13.908 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:13.908 "is_configured": true, 00:27:13.908 "data_offset": 2048, 00:27:13.908 "data_size": 63488 00:27:13.908 }, 00:27:13.908 { 00:27:13.908 "name": "BaseBdev2", 00:27:13.908 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:13.908 "is_configured": true, 00:27:13.908 "data_offset": 2048, 00:27:13.908 "data_size": 63488 00:27:13.908 }, 00:27:13.908 { 00:27:13.908 "name": "BaseBdev3", 00:27:13.908 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:13.908 "is_configured": true, 00:27:13.908 "data_offset": 2048, 00:27:13.908 "data_size": 63488 00:27:13.908 } 
00:27:13.908 ] 00:27:13.908 }' 00:27:13.908 12:11:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:13.908 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:27:14.838 12:11:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:14.838 [2024-11-29 12:11:20.280758] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:14.838 [2024-11-29 12:11:20.280811] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:14.838 [2024-11-29 12:11:20.280916] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:14.838 [2024-11-29 12:11:20.281019] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:14.838 [2024-11-29 12:11:20.281034] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:27:14.838 12:11:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.838 12:11:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:15.096 12:11:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:15.096 12:11:20 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:15.096 12:11:20 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@12 -- # local i 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:15.096 12:11:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:15.354 /dev/nbd0 00:27:15.354 12:11:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:15.354 12:11:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:15.354 12:11:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:15.354 12:11:20 -- common/autotest_common.sh@867 -- # local i 00:27:15.354 12:11:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:15.354 12:11:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:15.354 12:11:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:15.354 12:11:20 -- common/autotest_common.sh@871 -- # break 00:27:15.354 12:11:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:15.354 12:11:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:15.354 12:11:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:15.354 1+0 records in 00:27:15.354 1+0 records out 00:27:15.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325111 s, 12.6 MB/s 00:27:15.354 12:11:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.354 12:11:20 -- common/autotest_common.sh@884 -- # size=4096 00:27:15.354 12:11:20 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.354 12:11:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:15.354 12:11:20 -- common/autotest_common.sh@887 -- # return 0 00:27:15.354 12:11:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:15.354 12:11:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:15.354 12:11:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:15.920 /dev/nbd1 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:15.920 12:11:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:15.920 12:11:21 -- common/autotest_common.sh@867 -- # local i 00:27:15.920 12:11:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:15.920 12:11:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:15.920 12:11:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:15.920 12:11:21 -- common/autotest_common.sh@871 -- # break 00:27:15.920 12:11:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:15.920 12:11:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:15.920 12:11:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:15.920 1+0 records in 00:27:15.920 1+0 records out 00:27:15.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327203 s, 12.5 MB/s 00:27:15.920 12:11:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.920 12:11:21 -- common/autotest_common.sh@884 -- # size=4096 00:27:15.920 12:11:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.920 12:11:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:15.920 12:11:21 -- common/autotest_common.sh@887 -- # return 0 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:15.920 12:11:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:15.920 12:11:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@51 -- # local i 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:15.920 12:11:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@41 -- # break 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@45 -- # return 0 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:16.178 12:11:21 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@41 -- # break 00:27:16.437 12:11:21 -- bdev/nbd_common.sh@45 -- # return 0 00:27:16.437 12:11:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:27:16.437 12:11:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:16.437 12:11:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:27:16.437 12:11:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:16.696 12:11:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:16.954 [2024-11-29 12:11:22.391972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:16.954 [2024-11-29 12:11:22.392096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.954 [2024-11-29 12:11:22.392170] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:16.954 [2024-11-29 12:11:22.392204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.954 [2024-11-29 12:11:22.394808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.954 [2024-11-29 12:11:22.394885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:16.954 [2024-11-29 12:11:22.394994] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:16.954 [2024-11-29 12:11:22.395067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.954 BaseBdev1 00:27:16.954 12:11:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:16.954 12:11:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:27:16.954 12:11:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:27:17.212 12:11:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:17.470 [2024-11-29 12:11:22.964081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:17.470 [2024-11-29 12:11:22.964176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.470 [2024-11-29 12:11:22.964226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:17.470 [2024-11-29 12:11:22.964259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.470 [2024-11-29 12:11:22.964751] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.470 [2024-11-29 12:11:22.964811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:17.470 [2024-11-29 12:11:22.964910] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev2 00:27:17.470 [2024-11-29 12:11:22.964927] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:27:17.470 [2024-11-29 12:11:22.964935] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:17.471 [2024-11-29 12:11:22.964970] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:27:17.471 [2024-11-29 12:11:22.965033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.471 BaseBdev2 00:27:17.471 12:11:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:17.471 12:11:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:27:17.471 12:11:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:27:17.729 12:11:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:17.987 [2024-11-29 12:11:23.420212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:17.987 [2024-11-29 12:11:23.420312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.987 [2024-11-29 12:11:23.420363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:17.987 [2024-11-29 12:11:23.420404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.987 [2024-11-29 12:11:23.420885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.987 [2024-11-29 12:11:23.420937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:17.987 [2024-11-29 12:11:23.421030] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:27:17.987 [2024-11-29 12:11:23.421065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:17.987 BaseBdev3 00:27:17.987 12:11:23 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:18.246 12:11:23 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:18.505 [2024-11-29 12:11:23.872313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:18.505 [2024-11-29 12:11:23.872421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.505 [2024-11-29 12:11:23.872474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:18.505 [2024-11-29 12:11:23.872509] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.505 [2024-11-29 12:11:23.873013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.505 [2024-11-29 12:11:23.873075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:18.505 [2024-11-29 12:11:23.873177] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:27:18.505 [2024-11-29 12:11:23.873215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:18.505 spare 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.505 12:11:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.505 [2024-11-29 12:11:23.973350] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:18.505 [2024-11-29 12:11:23.973396] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:18.505 [2024-11-29 12:11:23.973606] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044230 00:27:18.505 [2024-11-29 12:11:23.974507] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:18.505 [2024-11-29 12:11:23.974534] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:18.505 [2024-11-29 12:11:23.974721] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.763 12:11:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:18.763 "name": "raid_bdev1", 00:27:18.763 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:18.763 "strip_size_kb": 64, 00:27:18.763 "state": "online", 00:27:18.763 "raid_level": "raid5f", 00:27:18.763 "superblock": true, 00:27:18.763 "num_base_bdevs": 3, 00:27:18.763 "num_base_bdevs_discovered": 3, 00:27:18.763 "num_base_bdevs_operational": 3, 00:27:18.763 "base_bdevs_list": [ 00:27:18.763 { 00:27:18.763 "name": "spare", 00:27:18.763 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:18.763 "is_configured": true, 00:27:18.763 "data_offset": 2048, 00:27:18.763 "data_size": 63488 00:27:18.763 }, 00:27:18.763 { 00:27:18.763 "name": "BaseBdev2", 00:27:18.763 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:18.763 "is_configured": true, 00:27:18.763 "data_offset": 2048, 00:27:18.763 "data_size": 63488 00:27:18.763 }, 00:27:18.763 { 00:27:18.763 "name": "BaseBdev3", 00:27:18.763 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:18.763 "is_configured": true, 00:27:18.764 "data_offset": 2048, 00:27:18.764 "data_size": 63488 00:27:18.764 } 00:27:18.764 ] 00:27:18.764 }' 00:27:18.764 12:11:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:18.764 12:11:24 -- common/autotest_common.sh@10 -- # set +x 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.332 12:11:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.590 12:11:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:19.590 "name": "raid_bdev1", 00:27:19.590 "uuid": "662c3795-9ba1-492a-a815-730ce72ca872", 00:27:19.590 "strip_size_kb": 64, 00:27:19.590 "state": "online", 00:27:19.590 "raid_level": "raid5f", 00:27:19.590 "superblock": true, 00:27:19.590 "num_base_bdevs": 3, 00:27:19.590 "num_base_bdevs_discovered": 3, 00:27:19.590 "num_base_bdevs_operational": 3, 00:27:19.590 "base_bdevs_list": [ 00:27:19.590 { 00:27:19.590 "name": "spare", 00:27:19.590 "uuid": "de4c67c0-ecb6-5ab9-a66b-6fa51d73bd1a", 00:27:19.590 "is_configured": true, 00:27:19.590 "data_offset": 2048, 00:27:19.590 "data_size": 63488 00:27:19.590 }, 00:27:19.590 { 00:27:19.590 "name": "BaseBdev2", 00:27:19.590 "uuid": "0c0a7bfc-02c8-529c-ab00-761adf49826d", 00:27:19.590 "is_configured": true, 00:27:19.590 "data_offset": 2048, 00:27:19.590 "data_size": 63488 00:27:19.590 }, 00:27:19.590 { 00:27:19.590 "name": "BaseBdev3", 00:27:19.590 "uuid": "0aa7ce17-ddad-5408-9146-e8d8276bd11c", 00:27:19.590 "is_configured": true, 00:27:19.590 "data_offset": 2048, 00:27:19.590 "data_size": 63488 00:27:19.590 } 00:27:19.590 ] 00:27:19.590 }' 00:27:19.590 12:11:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:19.590 12:11:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:19.590 12:11:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:19.846 12:11:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:19.846 12:11:25 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.846 12:11:25 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:20.104 12:11:25 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.104 12:11:25 -- bdev/bdev_raid.sh@709 -- # killprocess 140287 00:27:20.104 12:11:25 -- common/autotest_common.sh@936 -- # '[' -z 140287 ']' 00:27:20.104 12:11:25 -- common/autotest_common.sh@940 -- # kill -0 140287 00:27:20.104 12:11:25 -- common/autotest_common.sh@941 -- # uname 00:27:20.104 12:11:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:20.104 12:11:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140287 00:27:20.104 killing process with pid 140287 00:27:20.104 Received shutdown signal, test time was about 60.000000 seconds 00:27:20.104 00:27:20.104 Latency(us) 00:27:20.104 [2024-11-29T12:11:25.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.104 [2024-11-29T12:11:25.615Z] =================================================================================================================== 00:27:20.104 [2024-11-29T12:11:25.615Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:20.104 12:11:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:20.104 12:11:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:20.104 12:11:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140287' 00:27:20.104 12:11:25 -- common/autotest_common.sh@955 -- # kill 140287 00:27:20.104 12:11:25 -- common/autotest_common.sh@960 -- # wait 140287 00:27:20.104 [2024-11-29 12:11:25.431817] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:20.104 [2024-11-29 12:11:25.431927] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:27:20.104 [2024-11-29 12:11:25.432015] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:20.104 [2024-11-29 12:11:25.432027] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:20.104 [2024-11-29 12:11:25.481030] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:20.362 ************************************ 00:27:20.362 END TEST raid5f_rebuild_test_sb 00:27:20.362 ************************************ 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:20.362 00:27:20.362 real 0m24.609s 00:27:20.362 user 0m39.557s 00:27:20.362 sys 0m3.042s 00:27:20.362 12:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:20.362 12:11:25 -- common/autotest_common.sh@10 -- # set +x 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:27:20.362 12:11:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:20.362 12:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:20.362 12:11:25 -- common/autotest_common.sh@10 -- # set +x 00:27:20.362 ************************************ 00:27:20.362 START TEST raid5f_state_function_test 00:27:20.362 ************************************ 00:27:20.362 12:11:25 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:20.362 12:11:25 -- 
bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=140922 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140922' 00:27:20.362 Process raid pid: 140922 00:27:20.362 12:11:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140922 /var/tmp/spdk-raid.sock 00:27:20.362 12:11:25 -- common/autotest_common.sh@829 -- # '[' -z 140922 ']' 00:27:20.362 12:11:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:20.362 12:11:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.362 12:11:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:20.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:20.362 12:11:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.362 12:11:25 -- common/autotest_common.sh@10 -- # set +x 00:27:20.362 [2024-11-29 12:11:25.872789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:20.362 [2024-11-29 12:11:25.873149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.621 [2024-11-29 12:11:26.033767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.621 [2024-11-29 12:11:26.125183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.879 [2024-11-29 12:11:26.178705] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.447 12:11:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.447 12:11:26 -- common/autotest_common.sh@862 -- # return 0 00:27:21.447 12:11:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:21.707 [2024-11-29 12:11:27.056498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:21.707 [2024-11-29 12:11:27.056601] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:21.707 [2024-11-29 12:11:27.056616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:21.707 [2024-11-29 12:11:27.056639] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:21.707 [2024-11-29 12:11:27.056648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:21.707 [2024-11-29 12:11:27.056700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:21.707 [2024-11-29 12:11:27.056710] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:21.707 [2024-11-29 12:11:27.056739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
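
A note for readers reconstructing this step by hand: the create-then-verify sequence traced above reduces to two RPC calls and a jq filter. A minimal sketch, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock; $rpc is a shorthand defined in the sketch, not a variable taken from the test script:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Creating the array before any base bdev exists should leave it in the "configuring" state.
  $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [[ "$state" == "configuring" ]]   # verify_raid_bdev_state also checks level, strip size and base bdev counts
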
00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.707 12:11:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:21.966 12:11:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:21.966 "name": "Existed_Raid", 00:27:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.966 "strip_size_kb": 64, 00:27:21.966 "state": "configuring", 00:27:21.966 "raid_level": "raid5f", 00:27:21.966 "superblock": false, 00:27:21.966 "num_base_bdevs": 4, 00:27:21.966 "num_base_bdevs_discovered": 0, 00:27:21.966 "num_base_bdevs_operational": 4, 00:27:21.966 "base_bdevs_list": [ 00:27:21.966 { 00:27:21.966 "name": "BaseBdev1", 00:27:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.966 "is_configured": false, 00:27:21.966 "data_offset": 0, 00:27:21.966 "data_size": 0 00:27:21.966 }, 00:27:21.966 { 00:27:21.966 "name": "BaseBdev2", 00:27:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.966 "is_configured": false, 00:27:21.966 "data_offset": 0, 00:27:21.966 "data_size": 0 00:27:21.966 }, 00:27:21.966 { 00:27:21.966 "name": "BaseBdev3", 00:27:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.966 "is_configured": false, 00:27:21.966 "data_offset": 0, 00:27:21.966 "data_size": 0 00:27:21.966 }, 00:27:21.966 { 00:27:21.966 "name": "BaseBdev4", 00:27:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.966 "is_configured": false, 00:27:21.966 "data_offset": 0, 00:27:21.966 "data_size": 0 00:27:21.966 } 00:27:21.966 ] 00:27:21.966 }' 00:27:21.966 12:11:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:21.966 12:11:27 -- common/autotest_common.sh@10 -- # set +x 00:27:22.533 12:11:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:22.792 [2024-11-29 12:11:28.216559] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:22.792 [2024-11-29 12:11:28.216618] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:27:22.792 12:11:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:23.051 [2024-11-29 12:11:28.480667] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:23.052 [2024-11-29 12:11:28.480754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:23.052 [2024-11-29 12:11:28.480767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:23.052 [2024-11-29 12:11:28.480797] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:27:23.052 [2024-11-29 12:11:28.480806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:23.052 [2024-11-29 12:11:28.480831] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:23.052 [2024-11-29 12:11:28.480839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:23.052 [2024-11-29 12:11:28.480866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:23.052 12:11:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:23.310 [2024-11-29 12:11:28.756100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:23.310 BaseBdev1 00:27:23.310 12:11:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:23.310 12:11:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:23.310 12:11:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:23.310 12:11:28 -- common/autotest_common.sh@899 -- # local i 00:27:23.310 12:11:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:23.310 12:11:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:23.310 12:11:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:23.568 12:11:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:23.827 [ 00:27:23.827 { 00:27:23.827 "name": "BaseBdev1", 00:27:23.827 "aliases": [ 00:27:23.827 "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58" 00:27:23.827 ], 00:27:23.827 "product_name": "Malloc disk", 00:27:23.827 "block_size": 512, 00:27:23.827 "num_blocks": 65536, 00:27:23.827 "uuid": "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58", 00:27:23.827 "assigned_rate_limits": { 00:27:23.827 "rw_ios_per_sec": 0, 00:27:23.827 "rw_mbytes_per_sec": 0, 00:27:23.827 "r_mbytes_per_sec": 0, 00:27:23.827 "w_mbytes_per_sec": 0 00:27:23.827 }, 00:27:23.827 "claimed": true, 00:27:23.827 "claim_type": "exclusive_write", 00:27:23.827 "zoned": false, 00:27:23.827 "supported_io_types": { 00:27:23.827 "read": true, 00:27:23.827 "write": true, 00:27:23.827 "unmap": true, 00:27:23.827 "write_zeroes": true, 00:27:23.827 "flush": true, 00:27:23.827 "reset": true, 00:27:23.827 "compare": false, 00:27:23.827 "compare_and_write": false, 00:27:23.827 "abort": true, 00:27:23.827 "nvme_admin": false, 00:27:23.827 "nvme_io": false 00:27:23.827 }, 00:27:23.827 "memory_domains": [ 00:27:23.827 { 00:27:23.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.827 "dma_device_type": 2 00:27:23.827 } 00:27:23.827 ], 00:27:23.827 "driver_specific": {} 00:27:23.827 } 00:27:23.827 ] 00:27:23.827 12:11:29 -- common/autotest_common.sh@905 -- # return 0 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:23.827 12:11:29 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.827 12:11:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:24.085 12:11:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:24.085 "name": "Existed_Raid", 00:27:24.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.085 "strip_size_kb": 64, 00:27:24.085 "state": "configuring", 00:27:24.085 "raid_level": "raid5f", 00:27:24.085 "superblock": false, 00:27:24.085 "num_base_bdevs": 4, 00:27:24.085 "num_base_bdevs_discovered": 1, 00:27:24.085 "num_base_bdevs_operational": 4, 00:27:24.085 "base_bdevs_list": [ 00:27:24.085 { 00:27:24.085 "name": "BaseBdev1", 00:27:24.085 "uuid": "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58", 00:27:24.085 "is_configured": true, 00:27:24.085 "data_offset": 0, 00:27:24.085 "data_size": 65536 00:27:24.085 }, 00:27:24.085 { 00:27:24.085 "name": "BaseBdev2", 00:27:24.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.085 "is_configured": false, 00:27:24.085 "data_offset": 0, 00:27:24.085 "data_size": 0 00:27:24.085 }, 00:27:24.085 { 00:27:24.085 "name": "BaseBdev3", 00:27:24.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.085 "is_configured": false, 00:27:24.085 "data_offset": 0, 00:27:24.085 "data_size": 0 00:27:24.085 }, 00:27:24.085 { 00:27:24.085 "name": "BaseBdev4", 00:27:24.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.085 "is_configured": false, 00:27:24.085 "data_offset": 0, 00:27:24.085 "data_size": 0 00:27:24.085 } 00:27:24.085 ] 00:27:24.085 }' 00:27:24.085 12:11:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:24.085 12:11:29 -- common/autotest_common.sh@10 -- # set +x 00:27:24.650 12:11:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:24.909 [2024-11-29 12:11:30.340462] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:24.909 [2024-11-29 12:11:30.340546] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:27:24.909 12:11:30 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:27:24.909 12:11:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:25.167 [2024-11-29 12:11:30.572630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:25.167 [2024-11-29 12:11:30.574896] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:25.167 [2024-11-29 12:11:30.574987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:25.167 [2024-11-29 12:11:30.575002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:25.168 [2024-11-29 12:11:30.575030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:25.168 [2024-11-29 12:11:30.575040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:25.168 [2024-11-29 12:11:30.575059] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.168 12:11:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.426 12:11:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:25.426 "name": "Existed_Raid", 00:27:25.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.426 "strip_size_kb": 64, 00:27:25.426 "state": "configuring", 00:27:25.426 "raid_level": "raid5f", 00:27:25.426 "superblock": false, 00:27:25.426 "num_base_bdevs": 4, 00:27:25.426 "num_base_bdevs_discovered": 1, 00:27:25.426 "num_base_bdevs_operational": 4, 00:27:25.426 "base_bdevs_list": [ 00:27:25.426 { 00:27:25.426 "name": "BaseBdev1", 00:27:25.426 "uuid": "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58", 00:27:25.426 "is_configured": true, 00:27:25.426 "data_offset": 0, 00:27:25.426 "data_size": 65536 00:27:25.426 }, 00:27:25.426 { 00:27:25.426 "name": "BaseBdev2", 00:27:25.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.426 "is_configured": false, 00:27:25.426 "data_offset": 0, 00:27:25.426 "data_size": 0 00:27:25.426 }, 00:27:25.426 { 00:27:25.426 "name": "BaseBdev3", 00:27:25.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.426 "is_configured": false, 00:27:25.426 "data_offset": 0, 00:27:25.426 "data_size": 0 00:27:25.426 }, 00:27:25.426 { 00:27:25.426 "name": "BaseBdev4", 00:27:25.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.426 "is_configured": false, 00:27:25.426 "data_offset": 0, 00:27:25.426 "data_size": 0 00:27:25.426 } 00:27:25.426 ] 00:27:25.426 }' 00:27:25.426 12:11:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:25.426 12:11:30 -- common/autotest_common.sh@10 -- # set +x 00:27:26.363 12:11:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:26.363 [2024-11-29 12:11:31.762484] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.363 BaseBdev2 00:27:26.363 12:11:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:26.363 12:11:31 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:26.363 12:11:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:26.363 12:11:31 -- common/autotest_common.sh@899 -- # local i 00:27:26.363 12:11:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:26.363 12:11:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:26.363 12:11:31 -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:26.622 12:11:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:26.914 [ 00:27:26.914 { 00:27:26.914 "name": "BaseBdev2", 00:27:26.914 "aliases": [ 00:27:26.914 "7733e2cf-ee26-4623-bd24-b4df649ad547" 00:27:26.914 ], 00:27:26.914 "product_name": "Malloc disk", 00:27:26.914 "block_size": 512, 00:27:26.914 "num_blocks": 65536, 00:27:26.914 "uuid": "7733e2cf-ee26-4623-bd24-b4df649ad547", 00:27:26.914 "assigned_rate_limits": { 00:27:26.914 "rw_ios_per_sec": 0, 00:27:26.914 "rw_mbytes_per_sec": 0, 00:27:26.914 "r_mbytes_per_sec": 0, 00:27:26.914 "w_mbytes_per_sec": 0 00:27:26.914 }, 00:27:26.914 "claimed": true, 00:27:26.914 "claim_type": "exclusive_write", 00:27:26.914 "zoned": false, 00:27:26.914 "supported_io_types": { 00:27:26.914 "read": true, 00:27:26.914 "write": true, 00:27:26.914 "unmap": true, 00:27:26.914 "write_zeroes": true, 00:27:26.914 "flush": true, 00:27:26.914 "reset": true, 00:27:26.914 "compare": false, 00:27:26.914 "compare_and_write": false, 00:27:26.914 "abort": true, 00:27:26.914 "nvme_admin": false, 00:27:26.914 "nvme_io": false 00:27:26.914 }, 00:27:26.914 "memory_domains": [ 00:27:26.914 { 00:27:26.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.914 "dma_device_type": 2 00:27:26.914 } 00:27:26.914 ], 00:27:26.914 "driver_specific": {} 00:27:26.914 } 00:27:26.914 ] 00:27:26.914 12:11:32 -- common/autotest_common.sh@905 -- # return 0 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.914 12:11:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.172 12:11:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:27.172 "name": "Existed_Raid", 00:27:27.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.172 "strip_size_kb": 64, 00:27:27.172 "state": "configuring", 00:27:27.172 "raid_level": "raid5f", 00:27:27.172 "superblock": false, 00:27:27.172 "num_base_bdevs": 4, 00:27:27.172 "num_base_bdevs_discovered": 2, 00:27:27.172 "num_base_bdevs_operational": 4, 00:27:27.172 "base_bdevs_list": [ 00:27:27.172 { 00:27:27.172 "name": "BaseBdev1", 00:27:27.172 "uuid": "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58", 00:27:27.172 "is_configured": true, 00:27:27.173 "data_offset": 0, 00:27:27.173 "data_size": 65536 00:27:27.173 }, 00:27:27.173 { 00:27:27.173 "name": 
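
Each waitforbdev step in the trace pairs an examine flush with a timed lookup. A rough sketch of that pattern, reusing the $rpc shorthand from the earlier sketch (the 2000 ms value is the bdev_timeout set in the trace, not a built-in default):

  $rpc bdev_malloc_create 32 512 -b BaseBdev2   # 32 MB backing device with 512-byte blocks
  $rpc bdev_wait_for_examine                    # let outstanding examine callbacks complete
  $rpc bdev_get_bdevs -b BaseBdev2 -t 2000      # waits up to 2000 ms for the bdev to become visible
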
"BaseBdev2", 00:27:27.173 "uuid": "7733e2cf-ee26-4623-bd24-b4df649ad547", 00:27:27.173 "is_configured": true, 00:27:27.173 "data_offset": 0, 00:27:27.173 "data_size": 65536 00:27:27.173 }, 00:27:27.173 { 00:27:27.173 "name": "BaseBdev3", 00:27:27.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.173 "is_configured": false, 00:27:27.173 "data_offset": 0, 00:27:27.173 "data_size": 0 00:27:27.173 }, 00:27:27.173 { 00:27:27.173 "name": "BaseBdev4", 00:27:27.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.173 "is_configured": false, 00:27:27.173 "data_offset": 0, 00:27:27.173 "data_size": 0 00:27:27.173 } 00:27:27.173 ] 00:27:27.173 }' 00:27:27.173 12:11:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:27.173 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:27.739 12:11:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:27.997 [2024-11-29 12:11:33.404729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:27.997 BaseBdev3 00:27:27.997 12:11:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:27.997 12:11:33 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:27.997 12:11:33 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:27.997 12:11:33 -- common/autotest_common.sh@899 -- # local i 00:27:27.997 12:11:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:27.997 12:11:33 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:27.997 12:11:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:28.256 12:11:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:28.514 [ 00:27:28.514 { 00:27:28.514 "name": "BaseBdev3", 00:27:28.514 "aliases": [ 00:27:28.514 "ac867570-1d24-490d-81af-5a9fbba9f5a4" 00:27:28.514 ], 00:27:28.514 "product_name": "Malloc disk", 00:27:28.514 "block_size": 512, 00:27:28.514 "num_blocks": 65536, 00:27:28.514 "uuid": "ac867570-1d24-490d-81af-5a9fbba9f5a4", 00:27:28.514 "assigned_rate_limits": { 00:27:28.514 "rw_ios_per_sec": 0, 00:27:28.514 "rw_mbytes_per_sec": 0, 00:27:28.514 "r_mbytes_per_sec": 0, 00:27:28.514 "w_mbytes_per_sec": 0 00:27:28.514 }, 00:27:28.514 "claimed": true, 00:27:28.514 "claim_type": "exclusive_write", 00:27:28.514 "zoned": false, 00:27:28.514 "supported_io_types": { 00:27:28.514 "read": true, 00:27:28.514 "write": true, 00:27:28.514 "unmap": true, 00:27:28.514 "write_zeroes": true, 00:27:28.514 "flush": true, 00:27:28.514 "reset": true, 00:27:28.514 "compare": false, 00:27:28.514 "compare_and_write": false, 00:27:28.514 "abort": true, 00:27:28.514 "nvme_admin": false, 00:27:28.514 "nvme_io": false 00:27:28.514 }, 00:27:28.514 "memory_domains": [ 00:27:28.514 { 00:27:28.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.514 "dma_device_type": 2 00:27:28.514 } 00:27:28.514 ], 00:27:28.514 "driver_specific": {} 00:27:28.514 } 00:27:28.514 ] 00:27:28.514 12:11:33 -- common/autotest_common.sh@905 -- # return 0 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.514 12:11:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.772 12:11:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:28.772 "name": "Existed_Raid", 00:27:28.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.772 "strip_size_kb": 64, 00:27:28.772 "state": "configuring", 00:27:28.772 "raid_level": "raid5f", 00:27:28.772 "superblock": false, 00:27:28.772 "num_base_bdevs": 4, 00:27:28.772 "num_base_bdevs_discovered": 3, 00:27:28.772 "num_base_bdevs_operational": 4, 00:27:28.772 "base_bdevs_list": [ 00:27:28.772 { 00:27:28.772 "name": "BaseBdev1", 00:27:28.772 "uuid": "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58", 00:27:28.772 "is_configured": true, 00:27:28.772 "data_offset": 0, 00:27:28.772 "data_size": 65536 00:27:28.772 }, 00:27:28.772 { 00:27:28.772 "name": "BaseBdev2", 00:27:28.772 "uuid": "7733e2cf-ee26-4623-bd24-b4df649ad547", 00:27:28.772 "is_configured": true, 00:27:28.772 "data_offset": 0, 00:27:28.772 "data_size": 65536 00:27:28.772 }, 00:27:28.772 { 00:27:28.772 "name": "BaseBdev3", 00:27:28.772 "uuid": "ac867570-1d24-490d-81af-5a9fbba9f5a4", 00:27:28.772 "is_configured": true, 00:27:28.772 "data_offset": 0, 00:27:28.772 "data_size": 65536 00:27:28.772 }, 00:27:28.772 { 00:27:28.772 "name": "BaseBdev4", 00:27:28.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.772 "is_configured": false, 00:27:28.772 "data_offset": 0, 00:27:28.772 "data_size": 0 00:27:28.772 } 00:27:28.772 ] 00:27:28.772 }' 00:27:28.772 12:11:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:28.772 12:11:34 -- common/autotest_common.sh@10 -- # set +x 00:27:29.706 12:11:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:29.706 [2024-11-29 12:11:35.150327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:29.706 [2024-11-29 12:11:35.150880] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:27:29.706 [2024-11-29 12:11:35.151104] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:29.706 [2024-11-29 12:11:35.151494] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:27:29.706 [2024-11-29 12:11:35.152585] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:27:29.706 [2024-11-29 12:11:35.152807] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:27:29.706 [2024-11-29 12:11:35.153281] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.706 BaseBdev4 00:27:29.706 12:11:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 
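
With the fourth base bdev claimed, the array leaves "configuring": the trace shows an io device of 196608 blocks being registered, i.e. three of the four 65536-block malloc bdevs' worth of capacity, consistent with raid5f spending one device's worth on parity. A quick state check at this point could look like the following sketch (same $rpc shorthand as above):

  $rpc bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # expected at this point in the trace: online 4/4
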
00:27:29.706 12:11:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:29.706 12:11:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:29.706 12:11:35 -- common/autotest_common.sh@899 -- # local i 00:27:29.706 12:11:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:29.706 12:11:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:29.706 12:11:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:29.964 12:11:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:30.222 [ 00:27:30.222 { 00:27:30.222 "name": "BaseBdev4", 00:27:30.222 "aliases": [ 00:27:30.222 "aeef0b90-6c18-4572-9033-3670f2400a11" 00:27:30.222 ], 00:27:30.222 "product_name": "Malloc disk", 00:27:30.222 "block_size": 512, 00:27:30.222 "num_blocks": 65536, 00:27:30.222 "uuid": "aeef0b90-6c18-4572-9033-3670f2400a11", 00:27:30.222 "assigned_rate_limits": { 00:27:30.222 "rw_ios_per_sec": 0, 00:27:30.222 "rw_mbytes_per_sec": 0, 00:27:30.222 "r_mbytes_per_sec": 0, 00:27:30.222 "w_mbytes_per_sec": 0 00:27:30.222 }, 00:27:30.222 "claimed": true, 00:27:30.222 "claim_type": "exclusive_write", 00:27:30.222 "zoned": false, 00:27:30.222 "supported_io_types": { 00:27:30.222 "read": true, 00:27:30.222 "write": true, 00:27:30.222 "unmap": true, 00:27:30.222 "write_zeroes": true, 00:27:30.222 "flush": true, 00:27:30.222 "reset": true, 00:27:30.222 "compare": false, 00:27:30.222 "compare_and_write": false, 00:27:30.222 "abort": true, 00:27:30.222 "nvme_admin": false, 00:27:30.222 "nvme_io": false 00:27:30.222 }, 00:27:30.222 "memory_domains": [ 00:27:30.222 { 00:27:30.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.222 "dma_device_type": 2 00:27:30.222 } 00:27:30.222 ], 00:27:30.222 "driver_specific": {} 00:27:30.222 } 00:27:30.222 ] 00:27:30.222 12:11:35 -- common/autotest_common.sh@905 -- # return 0 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.222 12:11:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.480 12:11:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:30.480 "name": "Existed_Raid", 00:27:30.480 "uuid": "c3606d4d-9f96-4ebd-a053-66fc53362f4f", 00:27:30.480 "strip_size_kb": 64, 00:27:30.480 "state": "online", 00:27:30.480 "raid_level": "raid5f", 00:27:30.480 "superblock": false, 00:27:30.480 "num_base_bdevs": 4, 
00:27:30.480 "num_base_bdevs_discovered": 4, 00:27:30.480 "num_base_bdevs_operational": 4, 00:27:30.480 "base_bdevs_list": [ 00:27:30.480 { 00:27:30.480 "name": "BaseBdev1", 00:27:30.480 "uuid": "9ff6ec45-3f2f-4988-b6b4-f6740bf61a58", 00:27:30.480 "is_configured": true, 00:27:30.480 "data_offset": 0, 00:27:30.480 "data_size": 65536 00:27:30.480 }, 00:27:30.480 { 00:27:30.480 "name": "BaseBdev2", 00:27:30.480 "uuid": "7733e2cf-ee26-4623-bd24-b4df649ad547", 00:27:30.480 "is_configured": true, 00:27:30.480 "data_offset": 0, 00:27:30.480 "data_size": 65536 00:27:30.480 }, 00:27:30.480 { 00:27:30.480 "name": "BaseBdev3", 00:27:30.480 "uuid": "ac867570-1d24-490d-81af-5a9fbba9f5a4", 00:27:30.480 "is_configured": true, 00:27:30.480 "data_offset": 0, 00:27:30.480 "data_size": 65536 00:27:30.480 }, 00:27:30.480 { 00:27:30.480 "name": "BaseBdev4", 00:27:30.480 "uuid": "aeef0b90-6c18-4572-9033-3670f2400a11", 00:27:30.480 "is_configured": true, 00:27:30.480 "data_offset": 0, 00:27:30.480 "data_size": 65536 00:27:30.480 } 00:27:30.480 ] 00:27:30.480 }' 00:27:30.480 12:11:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:30.480 12:11:35 -- common/autotest_common.sh@10 -- # set +x 00:27:31.047 12:11:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:31.306 [2024-11-29 12:11:36.764432] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.306 12:11:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:31.565 12:11:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:31.565 "name": "Existed_Raid", 00:27:31.565 "uuid": "c3606d4d-9f96-4ebd-a053-66fc53362f4f", 00:27:31.565 "strip_size_kb": 64, 00:27:31.565 "state": "online", 00:27:31.565 "raid_level": "raid5f", 00:27:31.565 "superblock": false, 00:27:31.565 "num_base_bdevs": 4, 00:27:31.565 "num_base_bdevs_discovered": 3, 00:27:31.565 "num_base_bdevs_operational": 3, 00:27:31.565 "base_bdevs_list": [ 00:27:31.565 { 00:27:31.565 "name": null, 00:27:31.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.565 "is_configured": false, 00:27:31.565 "data_offset": 0, 00:27:31.565 "data_size": 65536 00:27:31.565 }, 00:27:31.565 { 00:27:31.565 
"name": "BaseBdev2", 00:27:31.565 "uuid": "7733e2cf-ee26-4623-bd24-b4df649ad547", 00:27:31.565 "is_configured": true, 00:27:31.565 "data_offset": 0, 00:27:31.565 "data_size": 65536 00:27:31.565 }, 00:27:31.565 { 00:27:31.565 "name": "BaseBdev3", 00:27:31.565 "uuid": "ac867570-1d24-490d-81af-5a9fbba9f5a4", 00:27:31.565 "is_configured": true, 00:27:31.565 "data_offset": 0, 00:27:31.565 "data_size": 65536 00:27:31.565 }, 00:27:31.565 { 00:27:31.565 "name": "BaseBdev4", 00:27:31.565 "uuid": "aeef0b90-6c18-4572-9033-3670f2400a11", 00:27:31.565 "is_configured": true, 00:27:31.565 "data_offset": 0, 00:27:31.565 "data_size": 65536 00:27:31.565 } 00:27:31.565 ] 00:27:31.565 }' 00:27:31.565 12:11:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:31.565 12:11:37 -- common/autotest_common.sh@10 -- # set +x 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:32.500 12:11:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:32.759 [2024-11-29 12:11:38.246504] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:32.759 [2024-11-29 12:11:38.247638] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:32.759 [2024-11-29 12:11:38.248420] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.017 12:11:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:33.017 12:11:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:33.017 12:11:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.017 12:11:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:33.333 12:11:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:33.333 12:11:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:33.333 12:11:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:33.333 [2024-11-29 12:11:38.820709] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:33.607 12:11:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:33.607 12:11:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:33.607 12:11:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.607 12:11:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:33.866 12:11:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:33.866 12:11:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:33.866 12:11:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:34.124 [2024-11-29 12:11:39.413535] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:34.124 [2024-11-29 12:11:39.414062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, 
state offline 00:27:34.124 12:11:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:34.124 12:11:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:34.124 12:11:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.124 12:11:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:34.383 12:11:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:34.383 12:11:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:34.383 12:11:39 -- bdev/bdev_raid.sh@287 -- # killprocess 140922 00:27:34.383 12:11:39 -- common/autotest_common.sh@936 -- # '[' -z 140922 ']' 00:27:34.383 12:11:39 -- common/autotest_common.sh@940 -- # kill -0 140922 00:27:34.383 12:11:39 -- common/autotest_common.sh@941 -- # uname 00:27:34.383 12:11:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:34.383 12:11:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140922 00:27:34.383 killing process with pid 140922 00:27:34.383 12:11:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:34.383 12:11:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:34.383 12:11:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140922' 00:27:34.383 12:11:39 -- common/autotest_common.sh@955 -- # kill 140922 00:27:34.383 12:11:39 -- common/autotest_common.sh@960 -- # wait 140922 00:27:34.383 [2024-11-29 12:11:39.758405] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:34.383 [2024-11-29 12:11:39.758518] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:34.641 ************************************ 00:27:34.641 END TEST raid5f_state_function_test 00:27:34.641 ************************************ 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:34.641 00:27:34.641 real 0m14.291s 00:27:34.641 user 0m26.388s 00:27:34.641 sys 0m1.757s 00:27:34.641 12:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:34.641 12:11:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:27:34.641 12:11:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:34.641 12:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:34.641 12:11:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 ************************************ 00:27:34.641 START TEST raid5f_state_function_test_sb 00:27:34.641 ************************************ 00:27:34.641 12:11:40 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:34.641 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=141361 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141361' 00:27:34.898 Process raid pid: 141361 00:27:34.898 12:11:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141361 /var/tmp/spdk-raid.sock 00:27:34.898 12:11:40 -- common/autotest_common.sh@829 -- # '[' -z 141361 ']' 00:27:34.898 12:11:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:34.898 12:11:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.898 12:11:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:34.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:34.899 12:11:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.899 12:11:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.899 [2024-11-29 12:11:40.215527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
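
The _sb variant starting here is the same state-function test run with superblock=true, so superblock_create_arg becomes -s in the trace above and the create call issued a few lines further down carries that flag. Sketch, same $rpc shorthand as before:

  $rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # With -s, the later bdev_raid_get_bdevs output reports data_offset 2048 and data_size 63488 per
  # base bdev (instead of 0 and 65536), which suggests the start of each base bdev is reserved for
  # the on-disk raid superblock.
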
00:27:34.899 [2024-11-29 12:11:40.215980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.899 [2024-11-29 12:11:40.361781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.157 [2024-11-29 12:11:40.462079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.157 [2024-11-29 12:11:40.519409] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.722 12:11:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.722 12:11:41 -- common/autotest_common.sh@862 -- # return 0 00:27:35.722 12:11:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:35.981 [2024-11-29 12:11:41.414664] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:35.981 [2024-11-29 12:11:41.415031] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:35.981 [2024-11-29 12:11:41.415201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:35.981 [2024-11-29 12:11:41.415271] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:35.981 [2024-11-29 12:11:41.415491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:35.981 [2024-11-29 12:11:41.415591] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:35.981 [2024-11-29 12:11:41.415748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:35.981 [2024-11-29 12:11:41.415900] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.981 12:11:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.240 12:11:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:36.240 "name": "Existed_Raid", 00:27:36.240 "uuid": "ff689fdb-7030-4560-85e1-2e1fa0f1dadb", 00:27:36.240 "strip_size_kb": 64, 00:27:36.240 "state": "configuring", 00:27:36.240 "raid_level": "raid5f", 00:27:36.240 "superblock": true, 00:27:36.240 "num_base_bdevs": 4, 00:27:36.240 "num_base_bdevs_discovered": 0, 00:27:36.240 "num_base_bdevs_operational": 4, 00:27:36.240 "base_bdevs_list": [ 00:27:36.240 { 
00:27:36.240 "name": "BaseBdev1", 00:27:36.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.240 "is_configured": false, 00:27:36.240 "data_offset": 0, 00:27:36.240 "data_size": 0 00:27:36.240 }, 00:27:36.240 { 00:27:36.240 "name": "BaseBdev2", 00:27:36.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.240 "is_configured": false, 00:27:36.240 "data_offset": 0, 00:27:36.240 "data_size": 0 00:27:36.240 }, 00:27:36.240 { 00:27:36.240 "name": "BaseBdev3", 00:27:36.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.240 "is_configured": false, 00:27:36.240 "data_offset": 0, 00:27:36.240 "data_size": 0 00:27:36.240 }, 00:27:36.240 { 00:27:36.240 "name": "BaseBdev4", 00:27:36.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.240 "is_configured": false, 00:27:36.240 "data_offset": 0, 00:27:36.240 "data_size": 0 00:27:36.240 } 00:27:36.240 ] 00:27:36.240 }' 00:27:36.240 12:11:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:36.240 12:11:41 -- common/autotest_common.sh@10 -- # set +x 00:27:36.806 12:11:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:37.065 [2024-11-29 12:11:42.542807] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:37.065 [2024-11-29 12:11:42.543099] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:27:37.065 12:11:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:37.323 [2024-11-29 12:11:42.770935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:37.323 [2024-11-29 12:11:42.771291] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:37.323 [2024-11-29 12:11:42.771414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:37.324 [2024-11-29 12:11:42.771491] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:37.324 [2024-11-29 12:11:42.771608] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:37.324 [2024-11-29 12:11:42.771670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:37.324 [2024-11-29 12:11:42.771704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:37.324 [2024-11-29 12:11:42.771842] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:37.324 12:11:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:37.582 [2024-11-29 12:11:43.046466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:37.582 BaseBdev1 00:27:37.582 12:11:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:37.582 12:11:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:37.582 12:11:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:37.582 12:11:43 -- common/autotest_common.sh@899 -- # local i 00:27:37.582 12:11:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:37.582 12:11:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:37.582 12:11:43 -- common/autotest_common.sh@902 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:37.840 12:11:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:38.098 [ 00:27:38.098 { 00:27:38.098 "name": "BaseBdev1", 00:27:38.098 "aliases": [ 00:27:38.098 "69c107ed-632f-4643-9c62-254e2a613364" 00:27:38.098 ], 00:27:38.098 "product_name": "Malloc disk", 00:27:38.098 "block_size": 512, 00:27:38.098 "num_blocks": 65536, 00:27:38.098 "uuid": "69c107ed-632f-4643-9c62-254e2a613364", 00:27:38.098 "assigned_rate_limits": { 00:27:38.098 "rw_ios_per_sec": 0, 00:27:38.098 "rw_mbytes_per_sec": 0, 00:27:38.098 "r_mbytes_per_sec": 0, 00:27:38.098 "w_mbytes_per_sec": 0 00:27:38.098 }, 00:27:38.098 "claimed": true, 00:27:38.098 "claim_type": "exclusive_write", 00:27:38.098 "zoned": false, 00:27:38.098 "supported_io_types": { 00:27:38.098 "read": true, 00:27:38.098 "write": true, 00:27:38.098 "unmap": true, 00:27:38.098 "write_zeroes": true, 00:27:38.098 "flush": true, 00:27:38.098 "reset": true, 00:27:38.098 "compare": false, 00:27:38.098 "compare_and_write": false, 00:27:38.098 "abort": true, 00:27:38.098 "nvme_admin": false, 00:27:38.098 "nvme_io": false 00:27:38.098 }, 00:27:38.098 "memory_domains": [ 00:27:38.098 { 00:27:38.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.098 "dma_device_type": 2 00:27:38.098 } 00:27:38.098 ], 00:27:38.098 "driver_specific": {} 00:27:38.098 } 00:27:38.098 ] 00:27:38.098 12:11:43 -- common/autotest_common.sh@905 -- # return 0 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.098 12:11:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.665 12:11:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:38.665 "name": "Existed_Raid", 00:27:38.665 "uuid": "4dd242f7-8eda-436c-b640-289de5492a69", 00:27:38.665 "strip_size_kb": 64, 00:27:38.665 "state": "configuring", 00:27:38.665 "raid_level": "raid5f", 00:27:38.665 "superblock": true, 00:27:38.665 "num_base_bdevs": 4, 00:27:38.665 "num_base_bdevs_discovered": 1, 00:27:38.665 "num_base_bdevs_operational": 4, 00:27:38.665 "base_bdevs_list": [ 00:27:38.665 { 00:27:38.665 "name": "BaseBdev1", 00:27:38.665 "uuid": "69c107ed-632f-4643-9c62-254e2a613364", 00:27:38.665 "is_configured": true, 00:27:38.665 "data_offset": 2048, 00:27:38.665 "data_size": 63488 00:27:38.665 }, 00:27:38.665 { 00:27:38.665 "name": "BaseBdev2", 00:27:38.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.665 "is_configured": false, 00:27:38.665 "data_offset": 0, 00:27:38.665 "data_size": 0 
00:27:38.665 }, 00:27:38.665 { 00:27:38.665 "name": "BaseBdev3", 00:27:38.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.665 "is_configured": false, 00:27:38.665 "data_offset": 0, 00:27:38.665 "data_size": 0 00:27:38.665 }, 00:27:38.665 { 00:27:38.665 "name": "BaseBdev4", 00:27:38.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.665 "is_configured": false, 00:27:38.665 "data_offset": 0, 00:27:38.665 "data_size": 0 00:27:38.665 } 00:27:38.665 ] 00:27:38.665 }' 00:27:38.665 12:11:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:38.665 12:11:43 -- common/autotest_common.sh@10 -- # set +x 00:27:39.280 12:11:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:39.280 [2024-11-29 12:11:44.758909] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:39.280 [2024-11-29 12:11:44.759274] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:27:39.280 12:11:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:27:39.280 12:11:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:39.539 12:11:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:39.796 BaseBdev1 00:27:40.054 12:11:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:27:40.054 12:11:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:40.054 12:11:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:40.054 12:11:45 -- common/autotest_common.sh@899 -- # local i 00:27:40.054 12:11:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:40.054 12:11:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:40.054 12:11:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:40.312 12:11:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:40.570 [ 00:27:40.570 { 00:27:40.570 "name": "BaseBdev1", 00:27:40.570 "aliases": [ 00:27:40.570 "21f55082-a867-4397-bced-bf9a289941ab" 00:27:40.570 ], 00:27:40.570 "product_name": "Malloc disk", 00:27:40.570 "block_size": 512, 00:27:40.570 "num_blocks": 65536, 00:27:40.570 "uuid": "21f55082-a867-4397-bced-bf9a289941ab", 00:27:40.570 "assigned_rate_limits": { 00:27:40.570 "rw_ios_per_sec": 0, 00:27:40.570 "rw_mbytes_per_sec": 0, 00:27:40.570 "r_mbytes_per_sec": 0, 00:27:40.570 "w_mbytes_per_sec": 0 00:27:40.570 }, 00:27:40.570 "claimed": false, 00:27:40.570 "zoned": false, 00:27:40.570 "supported_io_types": { 00:27:40.570 "read": true, 00:27:40.570 "write": true, 00:27:40.570 "unmap": true, 00:27:40.570 "write_zeroes": true, 00:27:40.570 "flush": true, 00:27:40.570 "reset": true, 00:27:40.570 "compare": false, 00:27:40.570 "compare_and_write": false, 00:27:40.570 "abort": true, 00:27:40.570 "nvme_admin": false, 00:27:40.570 "nvme_io": false 00:27:40.570 }, 00:27:40.570 "memory_domains": [ 00:27:40.570 { 00:27:40.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.570 "dma_device_type": 2 00:27:40.570 } 00:27:40.570 ], 00:27:40.570 "driver_specific": {} 00:27:40.570 } 00:27:40.570 ] 00:27:40.570 12:11:45 -- common/autotest_common.sh@905 -- # return 0 00:27:40.571 12:11:45 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:40.829 [2024-11-29 12:11:46.092446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.830 [2024-11-29 12:11:46.095007] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:40.830 [2024-11-29 12:11:46.095220] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:40.830 [2024-11-29 12:11:46.095340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:40.830 [2024-11-29 12:11:46.095483] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:40.830 [2024-11-29 12:11:46.095588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:40.830 [2024-11-29 12:11:46.095650] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.830 12:11:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.088 12:11:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:41.088 "name": "Existed_Raid", 00:27:41.088 "uuid": "d7ea3736-bd24-41dd-9491-6e1ac77249e0", 00:27:41.088 "strip_size_kb": 64, 00:27:41.088 "state": "configuring", 00:27:41.088 "raid_level": "raid5f", 00:27:41.088 "superblock": true, 00:27:41.088 "num_base_bdevs": 4, 00:27:41.088 "num_base_bdevs_discovered": 1, 00:27:41.088 "num_base_bdevs_operational": 4, 00:27:41.088 "base_bdevs_list": [ 00:27:41.088 { 00:27:41.088 "name": "BaseBdev1", 00:27:41.088 "uuid": "21f55082-a867-4397-bced-bf9a289941ab", 00:27:41.088 "is_configured": true, 00:27:41.088 "data_offset": 2048, 00:27:41.088 "data_size": 63488 00:27:41.088 }, 00:27:41.088 { 00:27:41.088 "name": "BaseBdev2", 00:27:41.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.088 "is_configured": false, 00:27:41.088 "data_offset": 0, 00:27:41.088 "data_size": 0 00:27:41.088 }, 00:27:41.088 { 00:27:41.088 "name": "BaseBdev3", 00:27:41.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.088 "is_configured": false, 00:27:41.088 "data_offset": 0, 00:27:41.088 "data_size": 0 00:27:41.088 }, 00:27:41.088 { 00:27:41.088 "name": "BaseBdev4", 00:27:41.088 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:41.088 "is_configured": false, 00:27:41.088 "data_offset": 0, 00:27:41.088 "data_size": 0 00:27:41.088 } 00:27:41.088 ] 00:27:41.088 }' 00:27:41.088 12:11:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:41.088 12:11:46 -- common/autotest_common.sh@10 -- # set +x 00:27:41.654 12:11:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:41.913 [2024-11-29 12:11:47.250124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:41.913 BaseBdev2 00:27:41.913 12:11:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:41.913 12:11:47 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:41.913 12:11:47 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:41.913 12:11:47 -- common/autotest_common.sh@899 -- # local i 00:27:41.913 12:11:47 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:41.913 12:11:47 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:41.913 12:11:47 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:42.172 12:11:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:42.430 [ 00:27:42.430 { 00:27:42.430 "name": "BaseBdev2", 00:27:42.430 "aliases": [ 00:27:42.430 "5153078f-32a3-4b7e-88d8-b6fd2d83c301" 00:27:42.430 ], 00:27:42.430 "product_name": "Malloc disk", 00:27:42.430 "block_size": 512, 00:27:42.430 "num_blocks": 65536, 00:27:42.430 "uuid": "5153078f-32a3-4b7e-88d8-b6fd2d83c301", 00:27:42.430 "assigned_rate_limits": { 00:27:42.430 "rw_ios_per_sec": 0, 00:27:42.430 "rw_mbytes_per_sec": 0, 00:27:42.430 "r_mbytes_per_sec": 0, 00:27:42.430 "w_mbytes_per_sec": 0 00:27:42.430 }, 00:27:42.430 "claimed": true, 00:27:42.430 "claim_type": "exclusive_write", 00:27:42.430 "zoned": false, 00:27:42.430 "supported_io_types": { 00:27:42.430 "read": true, 00:27:42.430 "write": true, 00:27:42.430 "unmap": true, 00:27:42.430 "write_zeroes": true, 00:27:42.430 "flush": true, 00:27:42.430 "reset": true, 00:27:42.430 "compare": false, 00:27:42.430 "compare_and_write": false, 00:27:42.430 "abort": true, 00:27:42.430 "nvme_admin": false, 00:27:42.430 "nvme_io": false 00:27:42.430 }, 00:27:42.430 "memory_domains": [ 00:27:42.430 { 00:27:42.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.430 "dma_device_type": 2 00:27:42.430 } 00:27:42.430 ], 00:27:42.430 "driver_specific": {} 00:27:42.430 } 00:27:42.430 ] 00:27:42.430 12:11:47 -- common/autotest_common.sh@905 -- # return 0 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.430 12:11:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.688 12:11:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:42.688 "name": "Existed_Raid", 00:27:42.688 "uuid": "d7ea3736-bd24-41dd-9491-6e1ac77249e0", 00:27:42.688 "strip_size_kb": 64, 00:27:42.688 "state": "configuring", 00:27:42.688 "raid_level": "raid5f", 00:27:42.688 "superblock": true, 00:27:42.688 "num_base_bdevs": 4, 00:27:42.688 "num_base_bdevs_discovered": 2, 00:27:42.688 "num_base_bdevs_operational": 4, 00:27:42.688 "base_bdevs_list": [ 00:27:42.688 { 00:27:42.688 "name": "BaseBdev1", 00:27:42.688 "uuid": "21f55082-a867-4397-bced-bf9a289941ab", 00:27:42.688 "is_configured": true, 00:27:42.688 "data_offset": 2048, 00:27:42.688 "data_size": 63488 00:27:42.688 }, 00:27:42.688 { 00:27:42.688 "name": "BaseBdev2", 00:27:42.688 "uuid": "5153078f-32a3-4b7e-88d8-b6fd2d83c301", 00:27:42.689 "is_configured": true, 00:27:42.689 "data_offset": 2048, 00:27:42.689 "data_size": 63488 00:27:42.689 }, 00:27:42.689 { 00:27:42.689 "name": "BaseBdev3", 00:27:42.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.689 "is_configured": false, 00:27:42.689 "data_offset": 0, 00:27:42.689 "data_size": 0 00:27:42.689 }, 00:27:42.689 { 00:27:42.689 "name": "BaseBdev4", 00:27:42.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.689 "is_configured": false, 00:27:42.689 "data_offset": 0, 00:27:42.689 "data_size": 0 00:27:42.689 } 00:27:42.689 ] 00:27:42.689 }' 00:27:42.689 12:11:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:42.689 12:11:48 -- common/autotest_common.sh@10 -- # set +x 00:27:43.255 12:11:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:43.513 [2024-11-29 12:11:48.947996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:43.513 BaseBdev3 00:27:43.513 12:11:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:43.513 12:11:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:43.513 12:11:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:43.513 12:11:48 -- common/autotest_common.sh@899 -- # local i 00:27:43.513 12:11:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:43.513 12:11:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:43.513 12:11:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:43.772 12:11:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:44.030 [ 00:27:44.030 { 00:27:44.030 "name": "BaseBdev3", 00:27:44.030 "aliases": [ 00:27:44.030 "7e7ae3cb-7eb3-416b-a17b-06e76273cdaa" 00:27:44.030 ], 00:27:44.031 "product_name": "Malloc disk", 00:27:44.031 "block_size": 512, 00:27:44.031 "num_blocks": 65536, 00:27:44.031 "uuid": "7e7ae3cb-7eb3-416b-a17b-06e76273cdaa", 00:27:44.031 "assigned_rate_limits": { 00:27:44.031 "rw_ios_per_sec": 0, 00:27:44.031 "rw_mbytes_per_sec": 0, 00:27:44.031 "r_mbytes_per_sec": 0, 00:27:44.031 "w_mbytes_per_sec": 0 00:27:44.031 }, 00:27:44.031 "claimed": true, 00:27:44.031 "claim_type": "exclusive_write", 
00:27:44.031 "zoned": false, 00:27:44.031 "supported_io_types": { 00:27:44.031 "read": true, 00:27:44.031 "write": true, 00:27:44.031 "unmap": true, 00:27:44.031 "write_zeroes": true, 00:27:44.031 "flush": true, 00:27:44.031 "reset": true, 00:27:44.031 "compare": false, 00:27:44.031 "compare_and_write": false, 00:27:44.031 "abort": true, 00:27:44.031 "nvme_admin": false, 00:27:44.031 "nvme_io": false 00:27:44.031 }, 00:27:44.031 "memory_domains": [ 00:27:44.031 { 00:27:44.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.031 "dma_device_type": 2 00:27:44.031 } 00:27:44.031 ], 00:27:44.031 "driver_specific": {} 00:27:44.031 } 00:27:44.031 ] 00:27:44.031 12:11:49 -- common/autotest_common.sh@905 -- # return 0 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.031 12:11:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.289 12:11:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:44.289 "name": "Existed_Raid", 00:27:44.289 "uuid": "d7ea3736-bd24-41dd-9491-6e1ac77249e0", 00:27:44.289 "strip_size_kb": 64, 00:27:44.289 "state": "configuring", 00:27:44.289 "raid_level": "raid5f", 00:27:44.289 "superblock": true, 00:27:44.289 "num_base_bdevs": 4, 00:27:44.289 "num_base_bdevs_discovered": 3, 00:27:44.289 "num_base_bdevs_operational": 4, 00:27:44.289 "base_bdevs_list": [ 00:27:44.289 { 00:27:44.289 "name": "BaseBdev1", 00:27:44.289 "uuid": "21f55082-a867-4397-bced-bf9a289941ab", 00:27:44.289 "is_configured": true, 00:27:44.289 "data_offset": 2048, 00:27:44.289 "data_size": 63488 00:27:44.289 }, 00:27:44.289 { 00:27:44.289 "name": "BaseBdev2", 00:27:44.289 "uuid": "5153078f-32a3-4b7e-88d8-b6fd2d83c301", 00:27:44.289 "is_configured": true, 00:27:44.289 "data_offset": 2048, 00:27:44.289 "data_size": 63488 00:27:44.289 }, 00:27:44.289 { 00:27:44.289 "name": "BaseBdev3", 00:27:44.289 "uuid": "7e7ae3cb-7eb3-416b-a17b-06e76273cdaa", 00:27:44.289 "is_configured": true, 00:27:44.289 "data_offset": 2048, 00:27:44.289 "data_size": 63488 00:27:44.289 }, 00:27:44.289 { 00:27:44.289 "name": "BaseBdev4", 00:27:44.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.289 "is_configured": false, 00:27:44.289 "data_offset": 0, 00:27:44.289 "data_size": 0 00:27:44.289 } 00:27:44.289 ] 00:27:44.289 }' 00:27:44.289 12:11:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:44.289 12:11:49 -- common/autotest_common.sh@10 -- # set +x 00:27:44.895 12:11:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:45.168 [2024-11-29 12:11:50.560058] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:45.168 [2024-11-29 12:11:50.560796] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:27:45.168 [2024-11-29 12:11:50.561033] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:45.168 [2024-11-29 12:11:50.561401] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:27:45.168 BaseBdev4 00:27:45.168 [2024-11-29 12:11:50.562673] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:27:45.168 [2024-11-29 12:11:50.562914] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:27:45.168 [2024-11-29 12:11:50.563449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.168 12:11:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:27:45.168 12:11:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:45.168 12:11:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:45.168 12:11:50 -- common/autotest_common.sh@899 -- # local i 00:27:45.168 12:11:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:45.168 12:11:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:45.168 12:11:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:45.427 12:11:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:45.692 [ 00:27:45.692 { 00:27:45.692 "name": "BaseBdev4", 00:27:45.692 "aliases": [ 00:27:45.692 "2ef93803-9286-4aec-9b43-15ee91642b6c" 00:27:45.692 ], 00:27:45.692 "product_name": "Malloc disk", 00:27:45.692 "block_size": 512, 00:27:45.692 "num_blocks": 65536, 00:27:45.692 "uuid": "2ef93803-9286-4aec-9b43-15ee91642b6c", 00:27:45.692 "assigned_rate_limits": { 00:27:45.692 "rw_ios_per_sec": 0, 00:27:45.692 "rw_mbytes_per_sec": 0, 00:27:45.692 "r_mbytes_per_sec": 0, 00:27:45.692 "w_mbytes_per_sec": 0 00:27:45.692 }, 00:27:45.692 "claimed": true, 00:27:45.692 "claim_type": "exclusive_write", 00:27:45.692 "zoned": false, 00:27:45.692 "supported_io_types": { 00:27:45.692 "read": true, 00:27:45.692 "write": true, 00:27:45.692 "unmap": true, 00:27:45.692 "write_zeroes": true, 00:27:45.692 "flush": true, 00:27:45.692 "reset": true, 00:27:45.692 "compare": false, 00:27:45.692 "compare_and_write": false, 00:27:45.692 "abort": true, 00:27:45.692 "nvme_admin": false, 00:27:45.692 "nvme_io": false 00:27:45.692 }, 00:27:45.692 "memory_domains": [ 00:27:45.692 { 00:27:45.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.692 "dma_device_type": 2 00:27:45.692 } 00:27:45.692 ], 00:27:45.692 "driver_specific": {} 00:27:45.692 } 00:27:45.692 ] 00:27:45.692 12:11:51 -- common/autotest_common.sh@905 -- # return 0 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.692 12:11:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.950 12:11:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:45.950 "name": "Existed_Raid", 00:27:45.950 "uuid": "d7ea3736-bd24-41dd-9491-6e1ac77249e0", 00:27:45.950 "strip_size_kb": 64, 00:27:45.950 "state": "online", 00:27:45.950 "raid_level": "raid5f", 00:27:45.950 "superblock": true, 00:27:45.950 "num_base_bdevs": 4, 00:27:45.950 "num_base_bdevs_discovered": 4, 00:27:45.950 "num_base_bdevs_operational": 4, 00:27:45.950 "base_bdevs_list": [ 00:27:45.950 { 00:27:45.950 "name": "BaseBdev1", 00:27:45.950 "uuid": "21f55082-a867-4397-bced-bf9a289941ab", 00:27:45.950 "is_configured": true, 00:27:45.950 "data_offset": 2048, 00:27:45.950 "data_size": 63488 00:27:45.950 }, 00:27:45.950 { 00:27:45.950 "name": "BaseBdev2", 00:27:45.950 "uuid": "5153078f-32a3-4b7e-88d8-b6fd2d83c301", 00:27:45.950 "is_configured": true, 00:27:45.950 "data_offset": 2048, 00:27:45.950 "data_size": 63488 00:27:45.950 }, 00:27:45.950 { 00:27:45.950 "name": "BaseBdev3", 00:27:45.950 "uuid": "7e7ae3cb-7eb3-416b-a17b-06e76273cdaa", 00:27:45.950 "is_configured": true, 00:27:45.950 "data_offset": 2048, 00:27:45.950 "data_size": 63488 00:27:45.950 }, 00:27:45.950 { 00:27:45.950 "name": "BaseBdev4", 00:27:45.950 "uuid": "2ef93803-9286-4aec-9b43-15ee91642b6c", 00:27:45.950 "is_configured": true, 00:27:45.950 "data_offset": 2048, 00:27:45.950 "data_size": 63488 00:27:45.950 } 00:27:45.950 ] 00:27:45.950 }' 00:27:45.950 12:11:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:45.950 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:27:46.515 12:11:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:46.775 [2024-11-29 12:11:52.250736] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:46.775 12:11:52 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.775 12:11:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.341 12:11:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:47.341 "name": "Existed_Raid", 00:27:47.341 "uuid": "d7ea3736-bd24-41dd-9491-6e1ac77249e0", 00:27:47.341 "strip_size_kb": 64, 00:27:47.341 "state": "online", 00:27:47.341 "raid_level": "raid5f", 00:27:47.341 "superblock": true, 00:27:47.341 "num_base_bdevs": 4, 00:27:47.341 "num_base_bdevs_discovered": 3, 00:27:47.341 "num_base_bdevs_operational": 3, 00:27:47.341 "base_bdevs_list": [ 00:27:47.341 { 00:27:47.341 "name": null, 00:27:47.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.341 "is_configured": false, 00:27:47.341 "data_offset": 2048, 00:27:47.341 "data_size": 63488 00:27:47.341 }, 00:27:47.341 { 00:27:47.341 "name": "BaseBdev2", 00:27:47.341 "uuid": "5153078f-32a3-4b7e-88d8-b6fd2d83c301", 00:27:47.341 "is_configured": true, 00:27:47.341 "data_offset": 2048, 00:27:47.341 "data_size": 63488 00:27:47.341 }, 00:27:47.341 { 00:27:47.341 "name": "BaseBdev3", 00:27:47.341 "uuid": "7e7ae3cb-7eb3-416b-a17b-06e76273cdaa", 00:27:47.341 "is_configured": true, 00:27:47.341 "data_offset": 2048, 00:27:47.341 "data_size": 63488 00:27:47.341 }, 00:27:47.341 { 00:27:47.341 "name": "BaseBdev4", 00:27:47.341 "uuid": "2ef93803-9286-4aec-9b43-15ee91642b6c", 00:27:47.341 "is_configured": true, 00:27:47.341 "data_offset": 2048, 00:27:47.341 "data_size": 63488 00:27:47.341 } 00:27:47.341 ] 00:27:47.341 }' 00:27:47.341 12:11:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:47.341 12:11:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.909 12:11:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:47.909 12:11:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:47.909 12:11:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.909 12:11:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:48.168 12:11:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:48.168 12:11:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:48.168 12:11:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:48.426 [2024-11-29 12:11:53.760325] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:48.426 [2024-11-29 12:11:53.760671] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:48.426 [2024-11-29 12:11:53.760971] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:48.426 12:11:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:48.426 12:11:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:48.426 12:11:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.426 12:11:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:48.684 12:11:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:48.684 12:11:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:48.684 12:11:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:48.943 [2024-11-29 12:11:54.335273] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:48.943 12:11:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:48.943 12:11:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:48.943 12:11:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.943 12:11:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:49.201 12:11:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:49.201 12:11:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:49.201 12:11:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:49.459 [2024-11-29 12:11:54.816605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:49.459 [2024-11-29 12:11:54.816991] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:27:49.459 12:11:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:49.459 12:11:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:49.459 12:11:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.459 12:11:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:49.718 12:11:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:49.718 12:11:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:49.718 12:11:55 -- bdev/bdev_raid.sh@287 -- # killprocess 141361 00:27:49.718 12:11:55 -- common/autotest_common.sh@936 -- # '[' -z 141361 ']' 00:27:49.718 12:11:55 -- common/autotest_common.sh@940 -- # kill -0 141361 00:27:49.718 12:11:55 -- common/autotest_common.sh@941 -- # uname 00:27:49.718 12:11:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:49.718 12:11:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141361 00:27:49.718 12:11:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:49.718 12:11:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:49.718 12:11:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141361' 00:27:49.718 killing process with pid 141361 00:27:49.718 12:11:55 -- common/autotest_common.sh@955 -- # kill 141361 00:27:49.718 12:11:55 -- common/autotest_common.sh@960 -- # wait 141361 00:27:49.718 [2024-11-29 12:11:55.122383] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:49.718 [2024-11-29 12:11:55.122702] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:49.975 00:27:49.975 real 0m15.221s 00:27:49.975 user 0m28.077s 00:27:49.975 sys 0m1.971s 00:27:49.975 12:11:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:49.975 ************************************ 00:27:49.975 END TEST raid5f_state_function_test_sb 00:27:49.975 ************************************ 00:27:49.975 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:27:49.975 12:11:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:27:49.975 12:11:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.975 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:49.975 
************************************ 00:27:49.975 START TEST raid5f_superblock_test 00:27:49.975 ************************************ 00:27:49.975 12:11:55 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@357 -- # raid_pid=141811 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:49.975 12:11:55 -- bdev/bdev_raid.sh@358 -- # waitforlisten 141811 /var/tmp/spdk-raid.sock 00:27:49.975 12:11:55 -- common/autotest_common.sh@829 -- # '[' -z 141811 ']' 00:27:49.975 12:11:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:49.975 12:11:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:49.975 12:11:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:49.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:49.975 12:11:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:49.975 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:50.233 [2024-11-29 12:11:55.490534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
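For reference, the superblock test that starts here drives the same rpc.py calls seen throughout this log; the following is a minimal sketch of that sequence, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and using the pt1..pt4 / raid_bdev1 names from the test (everything else is illustrative, not part of the captured run):

    # rpc.py path and socket as shown in the log above
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB malloc backing device with 512-byte blocks (65536 blocks, as bdev_get_bdevs reports)
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        # passthru bdev on top, with a fixed UUID so the run is reproducible
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done
    # assemble raid5f with a 64 KiB strip size; -s writes an on-disk superblock
    $RPC bdev_raid_create -z 64 -s -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1
    # check that the array reached the expected state
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

Because the array is created with -s, the superblock lands on the backing malloc bdevs, which is why the later bdev_raid_create attempt against malloc1..malloc4 directly reports "Existing raid superblock found on bdev malloc1" and fails with "File exists"; teardown mirrors the setup with bdev_raid_delete, bdev_passthru_delete and bdev_malloc_delete, as the subsequent log lines show.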
00:27:50.233 [2024-11-29 12:11:55.491022] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141811 ] 00:27:50.233 [2024-11-29 12:11:55.639613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.233 [2024-11-29 12:11:55.738984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.491 [2024-11-29 12:11:55.796845] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:51.058 12:11:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.058 12:11:56 -- common/autotest_common.sh@862 -- # return 0 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:51.058 12:11:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:51.316 malloc1 00:27:51.316 12:11:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:51.574 [2024-11-29 12:11:56.959544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:51.574 [2024-11-29 12:11:56.959944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.574 [2024-11-29 12:11:56.960039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:27:51.574 [2024-11-29 12:11:56.960355] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.574 [2024-11-29 12:11:56.963178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.574 [2024-11-29 12:11:56.963369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:51.574 pt1 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:51.574 12:11:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:51.832 malloc2 00:27:51.832 12:11:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:27:52.090 [2024-11-29 12:11:57.491070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:52.090 [2024-11-29 12:11:57.491403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.090 [2024-11-29 12:11:57.491494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:52.090 [2024-11-29 12:11:57.491703] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.090 [2024-11-29 12:11:57.494438] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.090 [2024-11-29 12:11:57.494618] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:52.090 pt2 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:52.090 12:11:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:52.091 12:11:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:52.091 12:11:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:52.349 malloc3 00:27:52.349 12:11:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:52.607 [2024-11-29 12:11:58.041809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:52.607 [2024-11-29 12:11:58.042174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.607 [2024-11-29 12:11:58.042268] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:52.607 [2024-11-29 12:11:58.042522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.607 [2024-11-29 12:11:58.045134] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.607 [2024-11-29 12:11:58.045313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:52.607 pt3 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:52.607 12:11:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:27:52.865 malloc4 00:27:52.865 12:11:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:27:53.123 [2024-11-29 12:11:58.593647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:53.123 [2024-11-29 12:11:58.594009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.123 [2024-11-29 12:11:58.594095] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:53.123 [2024-11-29 12:11:58.594424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.123 [2024-11-29 12:11:58.597077] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.123 [2024-11-29 12:11:58.597284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:53.123 pt4 00:27:53.123 12:11:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:53.123 12:11:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:53.123 12:11:58 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:27:53.381 [2024-11-29 12:11:58.881891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:53.381 [2024-11-29 12:11:58.884517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:53.381 [2024-11-29 12:11:58.884778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:53.381 [2024-11-29 12:11:58.884882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:53.381 [2024-11-29 12:11:58.885392] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:27:53.381 [2024-11-29 12:11:58.885536] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:53.381 [2024-11-29 12:11:58.885769] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:27:53.381 [2024-11-29 12:11:58.886844] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:27:53.381 [2024-11-29 12:11:58.886986] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:27:53.381 [2024-11-29 12:11:58.887329] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.640 12:11:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.898 12:11:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:53.898 "name": "raid_bdev1", 00:27:53.898 "uuid": 
"bbebf3d2-1386-4c27-997c-90617e312f74", 00:27:53.898 "strip_size_kb": 64, 00:27:53.898 "state": "online", 00:27:53.898 "raid_level": "raid5f", 00:27:53.898 "superblock": true, 00:27:53.898 "num_base_bdevs": 4, 00:27:53.898 "num_base_bdevs_discovered": 4, 00:27:53.898 "num_base_bdevs_operational": 4, 00:27:53.898 "base_bdevs_list": [ 00:27:53.898 { 00:27:53.898 "name": "pt1", 00:27:53.898 "uuid": "30d6843d-6ba2-5ab3-9103-e06766fb1a16", 00:27:53.898 "is_configured": true, 00:27:53.898 "data_offset": 2048, 00:27:53.898 "data_size": 63488 00:27:53.898 }, 00:27:53.898 { 00:27:53.898 "name": "pt2", 00:27:53.898 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:27:53.898 "is_configured": true, 00:27:53.898 "data_offset": 2048, 00:27:53.898 "data_size": 63488 00:27:53.898 }, 00:27:53.898 { 00:27:53.898 "name": "pt3", 00:27:53.898 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:27:53.898 "is_configured": true, 00:27:53.898 "data_offset": 2048, 00:27:53.898 "data_size": 63488 00:27:53.898 }, 00:27:53.898 { 00:27:53.898 "name": "pt4", 00:27:53.898 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:27:53.898 "is_configured": true, 00:27:53.898 "data_offset": 2048, 00:27:53.898 "data_size": 63488 00:27:53.898 } 00:27:53.898 ] 00:27:53.898 }' 00:27:53.898 12:11:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:53.898 12:11:59 -- common/autotest_common.sh@10 -- # set +x 00:27:54.465 12:11:59 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:54.465 12:11:59 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:27:54.723 [2024-11-29 12:12:00.035739] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:54.723 12:12:00 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bbebf3d2-1386-4c27-997c-90617e312f74 00:27:54.723 12:12:00 -- bdev/bdev_raid.sh@380 -- # '[' -z bbebf3d2-1386-4c27-997c-90617e312f74 ']' 00:27:54.723 12:12:00 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:54.981 [2024-11-29 12:12:00.307626] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:54.981 [2024-11-29 12:12:00.307931] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.981 [2024-11-29 12:12:00.308177] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.981 [2024-11-29 12:12:00.308419] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:54.982 [2024-11-29 12:12:00.308545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:27:54.982 12:12:00 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.982 12:12:00 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:27:55.239 12:12:00 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:27:55.239 12:12:00 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:27:55.239 12:12:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.239 12:12:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:55.495 12:12:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.495 12:12:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:27:55.751 12:12:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.751 12:12:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:56.010 12:12:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:56.010 12:12:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:56.268 12:12:01 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:56.268 12:12:01 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:56.526 12:12:01 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:27:56.526 12:12:01 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:56.526 12:12:01 -- common/autotest_common.sh@650 -- # local es=0 00:27:56.526 12:12:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:56.526 12:12:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:56.526 12:12:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.526 12:12:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:56.526 12:12:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.526 12:12:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:56.526 12:12:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.526 12:12:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:56.526 12:12:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:56.526 12:12:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:56.786 [2024-11-29 12:12:02.131955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:56.786 [2024-11-29 12:12:02.134498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:56.786 [2024-11-29 12:12:02.134712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:56.786 [2024-11-29 12:12:02.134797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:56.786 [2024-11-29 12:12:02.134982] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:27:56.786 [2024-11-29 12:12:02.135194] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:27:56.786 [2024-11-29 12:12:02.135355] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:27:56.786 [2024-11-29 12:12:02.135458] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:27:56.786 [2024-11-29 12:12:02.135695] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:56.786 [2024-11-29 12:12:02.135742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:27:56.786 request: 00:27:56.786 { 00:27:56.786 "name": "raid_bdev1", 00:27:56.786 "raid_level": "raid5f", 00:27:56.786 "base_bdevs": [ 00:27:56.786 "malloc1", 00:27:56.786 "malloc2", 00:27:56.786 "malloc3", 00:27:56.786 "malloc4" 00:27:56.786 ], 00:27:56.786 "superblock": false, 00:27:56.786 "strip_size_kb": 64, 00:27:56.786 "method": "bdev_raid_create", 00:27:56.786 "req_id": 1 00:27:56.786 } 00:27:56.786 Got JSON-RPC error response 00:27:56.786 response: 00:27:56.786 { 00:27:56.786 "code": -17, 00:27:56.786 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:56.786 } 00:27:56.786 12:12:02 -- common/autotest_common.sh@653 -- # es=1 00:27:56.786 12:12:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.786 12:12:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.786 12:12:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.786 12:12:02 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.786 12:12:02 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:27:57.046 12:12:02 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:27:57.046 12:12:02 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:27:57.046 12:12:02 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:57.305 [2024-11-29 12:12:02.652200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:57.305 [2024-11-29 12:12:02.652584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.305 [2024-11-29 12:12:02.652788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:57.305 [2024-11-29 12:12:02.652924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.305 [2024-11-29 12:12:02.655618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.305 [2024-11-29 12:12:02.655823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:57.305 [2024-11-29 12:12:02.656052] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:27:57.305 [2024-11-29 12:12:02.656241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:57.305 pt1 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.305 12:12:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.564 12:12:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:57.564 "name": "raid_bdev1", 00:27:57.564 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:27:57.564 "strip_size_kb": 64, 00:27:57.564 "state": "configuring", 00:27:57.564 "raid_level": "raid5f", 00:27:57.564 "superblock": true, 00:27:57.564 "num_base_bdevs": 4, 00:27:57.564 "num_base_bdevs_discovered": 1, 00:27:57.564 "num_base_bdevs_operational": 4, 00:27:57.564 "base_bdevs_list": [ 00:27:57.564 { 00:27:57.564 "name": "pt1", 00:27:57.564 "uuid": "30d6843d-6ba2-5ab3-9103-e06766fb1a16", 00:27:57.564 "is_configured": true, 00:27:57.564 "data_offset": 2048, 00:27:57.564 "data_size": 63488 00:27:57.564 }, 00:27:57.564 { 00:27:57.564 "name": null, 00:27:57.564 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:27:57.564 "is_configured": false, 00:27:57.564 "data_offset": 2048, 00:27:57.564 "data_size": 63488 00:27:57.564 }, 00:27:57.564 { 00:27:57.564 "name": null, 00:27:57.564 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:27:57.564 "is_configured": false, 00:27:57.564 "data_offset": 2048, 00:27:57.564 "data_size": 63488 00:27:57.564 }, 00:27:57.564 { 00:27:57.564 "name": null, 00:27:57.564 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:27:57.564 "is_configured": false, 00:27:57.564 "data_offset": 2048, 00:27:57.564 "data_size": 63488 00:27:57.564 } 00:27:57.564 ] 00:27:57.564 }' 00:27:57.564 12:12:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:57.564 12:12:02 -- common/autotest_common.sh@10 -- # set +x 00:27:58.131 12:12:03 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:27:58.131 12:12:03 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:58.389 [2024-11-29 12:12:03.880867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:58.389 [2024-11-29 12:12:03.881262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.389 [2024-11-29 12:12:03.881355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:58.389 [2024-11-29 12:12:03.881564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.389 [2024-11-29 12:12:03.882092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.389 [2024-11-29 12:12:03.882262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:58.389 [2024-11-29 12:12:03.882509] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:58.389 [2024-11-29 12:12:03.882639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:58.389 pt2 00:27:58.389 12:12:03 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:58.648 [2024-11-29 12:12:04.144952] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:58.908 12:12:04 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:58.908 "name": "raid_bdev1", 00:27:58.908 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:27:58.908 "strip_size_kb": 64, 00:27:58.908 "state": "configuring", 00:27:58.908 "raid_level": "raid5f", 00:27:58.908 "superblock": true, 00:27:58.908 "num_base_bdevs": 4, 00:27:58.908 "num_base_bdevs_discovered": 1, 00:27:58.908 "num_base_bdevs_operational": 4, 00:27:58.908 "base_bdevs_list": [ 00:27:58.908 { 00:27:58.908 "name": "pt1", 00:27:58.908 "uuid": "30d6843d-6ba2-5ab3-9103-e06766fb1a16", 00:27:58.908 "is_configured": true, 00:27:58.908 "data_offset": 2048, 00:27:58.908 "data_size": 63488 00:27:58.908 }, 00:27:58.908 { 00:27:58.908 "name": null, 00:27:58.908 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:27:58.908 "is_configured": false, 00:27:58.908 "data_offset": 2048, 00:27:58.908 "data_size": 63488 00:27:58.908 }, 00:27:58.908 { 00:27:58.908 "name": null, 00:27:58.908 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:27:58.908 "is_configured": false, 00:27:58.908 "data_offset": 2048, 00:27:58.908 "data_size": 63488 00:27:58.908 }, 00:27:58.908 { 00:27:58.908 "name": null, 00:27:58.908 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:27:58.908 "is_configured": false, 00:27:58.908 "data_offset": 2048, 00:27:58.908 "data_size": 63488 00:27:58.908 } 00:27:58.908 ] 00:27:58.908 }' 00:27:58.908 12:12:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:58.908 12:12:04 -- common/autotest_common.sh@10 -- # set +x 00:27:59.840 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:27:59.840 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:59.840 12:12:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:59.840 [2024-11-29 12:12:05.273165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:59.840 [2024-11-29 12:12:05.273534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:59.840 [2024-11-29 12:12:05.273624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:59.840 [2024-11-29 12:12:05.273836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:59.840 [2024-11-29 12:12:05.274384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:59.840 [2024-11-29 12:12:05.274571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:59.840 [2024-11-29 12:12:05.274810] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:59.840 [2024-11-29 12:12:05.274952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:59.840 pt2 00:27:59.840 12:12:05 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:27:59.840 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:59.840 12:12:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:00.099 [2024-11-29 12:12:05.545263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:00.099 [2024-11-29 12:12:05.545653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.099 [2024-11-29 12:12:05.545742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:00.099 [2024-11-29 12:12:05.546013] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.099 [2024-11-29 12:12:05.546569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.099 [2024-11-29 12:12:05.546757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:00.099 [2024-11-29 12:12:05.546962] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:00.099 [2024-11-29 12:12:05.547091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:00.099 pt3 00:28:00.099 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:00.099 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:00.099 12:12:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:00.359 [2024-11-29 12:12:05.777312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:00.359 [2024-11-29 12:12:05.777707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.359 [2024-11-29 12:12:05.777791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:00.359 [2024-11-29 12:12:05.778026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.359 [2024-11-29 12:12:05.778594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.359 [2024-11-29 12:12:05.778786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:00.359 [2024-11-29 12:12:05.778989] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:00.359 [2024-11-29 12:12:05.779145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:00.359 [2024-11-29 12:12:05.779372] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:00.359 [2024-11-29 12:12:05.779499] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:00.359 [2024-11-29 12:12:05.779695] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:28:00.359 [2024-11-29 12:12:05.780587] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:00.360 [2024-11-29 12:12:05.780721] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:00.360 [2024-11-29 12:12:05.780943] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.360 pt4 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.360 12:12:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.618 12:12:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:00.618 "name": "raid_bdev1", 00:28:00.618 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:00.618 "strip_size_kb": 64, 00:28:00.618 "state": "online", 00:28:00.618 "raid_level": "raid5f", 00:28:00.618 "superblock": true, 00:28:00.618 "num_base_bdevs": 4, 00:28:00.618 "num_base_bdevs_discovered": 4, 00:28:00.618 "num_base_bdevs_operational": 4, 00:28:00.618 "base_bdevs_list": [ 00:28:00.618 { 00:28:00.618 "name": "pt1", 00:28:00.618 "uuid": "30d6843d-6ba2-5ab3-9103-e06766fb1a16", 00:28:00.618 "is_configured": true, 00:28:00.618 "data_offset": 2048, 00:28:00.618 "data_size": 63488 00:28:00.618 }, 00:28:00.618 { 00:28:00.618 "name": "pt2", 00:28:00.618 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:00.618 "is_configured": true, 00:28:00.618 "data_offset": 2048, 00:28:00.618 "data_size": 63488 00:28:00.618 }, 00:28:00.618 { 00:28:00.618 "name": "pt3", 00:28:00.618 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:00.618 "is_configured": true, 00:28:00.618 "data_offset": 2048, 00:28:00.618 "data_size": 63488 00:28:00.618 }, 00:28:00.619 { 00:28:00.619 "name": "pt4", 00:28:00.619 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:00.619 "is_configured": true, 00:28:00.619 "data_offset": 2048, 00:28:00.619 "data_size": 63488 00:28:00.619 } 00:28:00.619 ] 00:28:00.619 }' 00:28:00.619 12:12:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:00.619 12:12:06 -- common/autotest_common.sh@10 -- # set +x 00:28:01.553 12:12:06 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:01.553 12:12:06 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:28:01.553 [2024-11-29 12:12:06.991424] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:01.553 12:12:07 -- bdev/bdev_raid.sh@430 -- # '[' bbebf3d2-1386-4c27-997c-90617e312f74 '!=' bbebf3d2-1386-4c27-997c-90617e312f74 ']' 00:28:01.553 12:12:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:28:01.553 12:12:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:01.553 12:12:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:28:01.553 12:12:07 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:01.812 [2024-11-29 12:12:07.239380] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.812 12:12:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.070 12:12:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:02.071 "name": "raid_bdev1", 00:28:02.071 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:02.071 "strip_size_kb": 64, 00:28:02.071 "state": "online", 00:28:02.071 "raid_level": "raid5f", 00:28:02.071 "superblock": true, 00:28:02.071 "num_base_bdevs": 4, 00:28:02.071 "num_base_bdevs_discovered": 3, 00:28:02.071 "num_base_bdevs_operational": 3, 00:28:02.071 "base_bdevs_list": [ 00:28:02.071 { 00:28:02.071 "name": null, 00:28:02.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.071 "is_configured": false, 00:28:02.071 "data_offset": 2048, 00:28:02.071 "data_size": 63488 00:28:02.071 }, 00:28:02.071 { 00:28:02.071 "name": "pt2", 00:28:02.071 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:02.071 "is_configured": true, 00:28:02.071 "data_offset": 2048, 00:28:02.071 "data_size": 63488 00:28:02.071 }, 00:28:02.071 { 00:28:02.071 "name": "pt3", 00:28:02.071 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:02.071 "is_configured": true, 00:28:02.071 "data_offset": 2048, 00:28:02.071 "data_size": 63488 00:28:02.071 }, 00:28:02.071 { 00:28:02.071 "name": "pt4", 00:28:02.071 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:02.071 "is_configured": true, 00:28:02.071 "data_offset": 2048, 00:28:02.071 "data_size": 63488 00:28:02.071 } 00:28:02.071 ] 00:28:02.071 }' 00:28:02.071 12:12:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:02.071 12:12:07 -- common/autotest_common.sh@10 -- # set +x 00:28:02.750 12:12:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:03.008 [2024-11-29 12:12:08.463596] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:03.008 [2024-11-29 12:12:08.463813] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:03.008 [2024-11-29 12:12:08.464003] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:03.008 [2024-11-29 12:12:08.464226] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:03.008 [2024-11-29 12:12:08.464361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:03.008 12:12:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.008 12:12:08 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:28:03.266 
12:12:08 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:28:03.266 12:12:08 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:28:03.266 12:12:08 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:28:03.266 12:12:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:03.266 12:12:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:03.524 12:12:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:03.524 12:12:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:03.524 12:12:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:03.782 12:12:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:03.782 12:12:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:03.782 12:12:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:04.040 12:12:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:04.040 12:12:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:04.040 12:12:09 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:28:04.040 12:12:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:04.040 12:12:09 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:04.297 [2024-11-29 12:12:09.739821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:04.297 [2024-11-29 12:12:09.740159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.297 [2024-11-29 12:12:09.740248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:04.297 [2024-11-29 12:12:09.740506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.297 [2024-11-29 12:12:09.743155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.297 [2024-11-29 12:12:09.743353] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:04.297 [2024-11-29 12:12:09.743585] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:04.297 [2024-11-29 12:12:09.743734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:04.297 pt2 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.297 12:12:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.555 12:12:10 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:28:04.555 "name": "raid_bdev1", 00:28:04.555 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:04.555 "strip_size_kb": 64, 00:28:04.555 "state": "configuring", 00:28:04.555 "raid_level": "raid5f", 00:28:04.555 "superblock": true, 00:28:04.555 "num_base_bdevs": 4, 00:28:04.555 "num_base_bdevs_discovered": 1, 00:28:04.555 "num_base_bdevs_operational": 3, 00:28:04.555 "base_bdevs_list": [ 00:28:04.555 { 00:28:04.555 "name": null, 00:28:04.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.555 "is_configured": false, 00:28:04.555 "data_offset": 2048, 00:28:04.555 "data_size": 63488 00:28:04.555 }, 00:28:04.555 { 00:28:04.555 "name": "pt2", 00:28:04.555 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:04.555 "is_configured": true, 00:28:04.555 "data_offset": 2048, 00:28:04.555 "data_size": 63488 00:28:04.555 }, 00:28:04.555 { 00:28:04.555 "name": null, 00:28:04.555 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:04.555 "is_configured": false, 00:28:04.555 "data_offset": 2048, 00:28:04.555 "data_size": 63488 00:28:04.555 }, 00:28:04.555 { 00:28:04.555 "name": null, 00:28:04.555 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:04.555 "is_configured": false, 00:28:04.555 "data_offset": 2048, 00:28:04.555 "data_size": 63488 00:28:04.555 } 00:28:04.555 ] 00:28:04.555 }' 00:28:04.555 12:12:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:04.555 12:12:10 -- common/autotest_common.sh@10 -- # set +x 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:05.488 [2024-11-29 12:12:10.884335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:05.488 [2024-11-29 12:12:10.884739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:05.488 [2024-11-29 12:12:10.884832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:05.488 [2024-11-29 12:12:10.884971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:05.488 [2024-11-29 12:12:10.885486] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:05.488 [2024-11-29 12:12:10.885657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:05.488 [2024-11-29 12:12:10.885865] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:05.488 [2024-11-29 12:12:10.886005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:05.488 pt3 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.488 12:12:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.747 12:12:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:05.747 "name": "raid_bdev1", 00:28:05.747 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:05.747 "strip_size_kb": 64, 00:28:05.747 "state": "configuring", 00:28:05.747 "raid_level": "raid5f", 00:28:05.747 "superblock": true, 00:28:05.747 "num_base_bdevs": 4, 00:28:05.747 "num_base_bdevs_discovered": 2, 00:28:05.747 "num_base_bdevs_operational": 3, 00:28:05.747 "base_bdevs_list": [ 00:28:05.747 { 00:28:05.747 "name": null, 00:28:05.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.747 "is_configured": false, 00:28:05.747 "data_offset": 2048, 00:28:05.747 "data_size": 63488 00:28:05.747 }, 00:28:05.747 { 00:28:05.747 "name": "pt2", 00:28:05.747 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:05.747 "is_configured": true, 00:28:05.747 "data_offset": 2048, 00:28:05.747 "data_size": 63488 00:28:05.747 }, 00:28:05.747 { 00:28:05.747 "name": "pt3", 00:28:05.747 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:05.747 "is_configured": true, 00:28:05.747 "data_offset": 2048, 00:28:05.747 "data_size": 63488 00:28:05.747 }, 00:28:05.747 { 00:28:05.747 "name": null, 00:28:05.747 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:05.747 "is_configured": false, 00:28:05.747 "data_offset": 2048, 00:28:05.747 "data_size": 63488 00:28:05.747 } 00:28:05.747 ] 00:28:05.747 }' 00:28:05.747 12:12:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:05.747 12:12:11 -- common/autotest_common.sh@10 -- # set +x 00:28:06.313 12:12:11 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:06.313 12:12:11 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:06.313 12:12:11 -- bdev/bdev_raid.sh@462 -- # i=3 00:28:06.313 12:12:11 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:06.572 [2024-11-29 12:12:12.064599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:06.572 [2024-11-29 12:12:12.065001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.572 [2024-11-29 12:12:12.065093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:06.572 [2024-11-29 12:12:12.065340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.572 [2024-11-29 12:12:12.065874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.572 [2024-11-29 12:12:12.066033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:06.572 [2024-11-29 12:12:12.066238] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:06.572 [2024-11-29 12:12:12.066397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:06.572 [2024-11-29 12:12:12.066663] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:28:06.572 [2024-11-29 12:12:12.066790] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:06.572 [2024-11-29 12:12:12.066904] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002c80 00:28:06.572 [2024-11-29 12:12:12.067821] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:28:06.572 [2024-11-29 12:12:12.067952] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:28:06.572 [2024-11-29 12:12:12.068303] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.572 pt4 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:06.831 "name": "raid_bdev1", 00:28:06.831 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:06.831 "strip_size_kb": 64, 00:28:06.831 "state": "online", 00:28:06.831 "raid_level": "raid5f", 00:28:06.831 "superblock": true, 00:28:06.831 "num_base_bdevs": 4, 00:28:06.831 "num_base_bdevs_discovered": 3, 00:28:06.831 "num_base_bdevs_operational": 3, 00:28:06.831 "base_bdevs_list": [ 00:28:06.831 { 00:28:06.831 "name": null, 00:28:06.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.831 "is_configured": false, 00:28:06.831 "data_offset": 2048, 00:28:06.831 "data_size": 63488 00:28:06.831 }, 00:28:06.831 { 00:28:06.831 "name": "pt2", 00:28:06.831 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:06.831 "is_configured": true, 00:28:06.831 "data_offset": 2048, 00:28:06.831 "data_size": 63488 00:28:06.831 }, 00:28:06.831 { 00:28:06.831 "name": "pt3", 00:28:06.831 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:06.831 "is_configured": true, 00:28:06.831 "data_offset": 2048, 00:28:06.831 "data_size": 63488 00:28:06.831 }, 00:28:06.831 { 00:28:06.831 "name": "pt4", 00:28:06.831 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:06.831 "is_configured": true, 00:28:06.831 "data_offset": 2048, 00:28:06.831 "data_size": 63488 00:28:06.831 } 00:28:06.831 ] 00:28:06.831 }' 00:28:06.831 12:12:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:06.831 12:12:12 -- common/autotest_common.sh@10 -- # set +x 00:28:07.398 12:12:12 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:28:07.398 12:12:12 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:07.656 [2024-11-29 12:12:13.160843] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:07.656 [2024-11-29 12:12:13.161104] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:07.656 [2024-11-29 12:12:13.161299] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:07.656 [2024-11-29 12:12:13.161547] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:07.656 [2024-11-29 12:12:13.161671] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:28:07.914 12:12:13 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.914 12:12:13 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:28:08.172 12:12:13 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:28:08.172 12:12:13 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:28:08.172 12:12:13 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:08.431 [2024-11-29 12:12:13.696954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:08.431 [2024-11-29 12:12:13.697330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.431 [2024-11-29 12:12:13.697427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:08.431 [2024-11-29 12:12:13.697670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.431 [2024-11-29 12:12:13.700308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.431 [2024-11-29 12:12:13.700509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:08.431 [2024-11-29 12:12:13.700724] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:08.431 [2024-11-29 12:12:13.700932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:08.431 pt1 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.431 12:12:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.691 12:12:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:08.691 "name": "raid_bdev1", 00:28:08.691 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:08.691 "strip_size_kb": 64, 00:28:08.691 "state": "configuring", 00:28:08.691 "raid_level": "raid5f", 00:28:08.691 "superblock": true, 00:28:08.691 "num_base_bdevs": 4, 00:28:08.691 "num_base_bdevs_discovered": 1, 00:28:08.691 "num_base_bdevs_operational": 4, 00:28:08.691 "base_bdevs_list": [ 00:28:08.691 { 00:28:08.691 "name": "pt1", 00:28:08.691 "uuid": "30d6843d-6ba2-5ab3-9103-e06766fb1a16", 00:28:08.691 "is_configured": true, 
00:28:08.691 "data_offset": 2048, 00:28:08.691 "data_size": 63488 00:28:08.691 }, 00:28:08.691 { 00:28:08.691 "name": null, 00:28:08.691 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:08.691 "is_configured": false, 00:28:08.691 "data_offset": 2048, 00:28:08.691 "data_size": 63488 00:28:08.691 }, 00:28:08.691 { 00:28:08.691 "name": null, 00:28:08.691 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:08.691 "is_configured": false, 00:28:08.691 "data_offset": 2048, 00:28:08.691 "data_size": 63488 00:28:08.691 }, 00:28:08.691 { 00:28:08.691 "name": null, 00:28:08.691 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:08.691 "is_configured": false, 00:28:08.691 "data_offset": 2048, 00:28:08.691 "data_size": 63488 00:28:08.691 } 00:28:08.691 ] 00:28:08.691 }' 00:28:08.691 12:12:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:08.691 12:12:13 -- common/autotest_common.sh@10 -- # set +x 00:28:09.259 12:12:14 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:28:09.259 12:12:14 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:09.259 12:12:14 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:09.517 12:12:14 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:09.517 12:12:14 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:09.517 12:12:14 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:09.775 12:12:15 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:09.775 12:12:15 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:09.775 12:12:15 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:10.033 12:12:15 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:10.033 12:12:15 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:10.033 12:12:15 -- bdev/bdev_raid.sh@489 -- # i=3 00:28:10.033 12:12:15 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:10.033 [2024-11-29 12:12:15.529605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:10.033 [2024-11-29 12:12:15.530000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.033 [2024-11-29 12:12:15.530082] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:10.033 [2024-11-29 12:12:15.530384] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.033 [2024-11-29 12:12:15.530906] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.034 [2024-11-29 12:12:15.531095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:10.034 [2024-11-29 12:12:15.531312] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:10.034 [2024-11-29 12:12:15.531437] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:10.034 [2024-11-29 12:12:15.531541] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:10.034 [2024-11-29 12:12:15.531601] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:28:10.034 [2024-11-29 12:12:15.531762] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:10.034 pt4 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.292 12:12:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.565 12:12:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:10.565 "name": "raid_bdev1", 00:28:10.565 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:10.565 "strip_size_kb": 64, 00:28:10.565 "state": "configuring", 00:28:10.565 "raid_level": "raid5f", 00:28:10.565 "superblock": true, 00:28:10.565 "num_base_bdevs": 4, 00:28:10.565 "num_base_bdevs_discovered": 1, 00:28:10.565 "num_base_bdevs_operational": 3, 00:28:10.565 "base_bdevs_list": [ 00:28:10.565 { 00:28:10.565 "name": null, 00:28:10.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.565 "is_configured": false, 00:28:10.565 "data_offset": 2048, 00:28:10.565 "data_size": 63488 00:28:10.565 }, 00:28:10.565 { 00:28:10.565 "name": null, 00:28:10.565 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:10.565 "is_configured": false, 00:28:10.565 "data_offset": 2048, 00:28:10.565 "data_size": 63488 00:28:10.565 }, 00:28:10.565 { 00:28:10.565 "name": null, 00:28:10.565 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:10.565 "is_configured": false, 00:28:10.565 "data_offset": 2048, 00:28:10.565 "data_size": 63488 00:28:10.565 }, 00:28:10.565 { 00:28:10.565 "name": "pt4", 00:28:10.565 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:10.565 "is_configured": true, 00:28:10.565 "data_offset": 2048, 00:28:10.565 "data_size": 63488 00:28:10.565 } 00:28:10.565 ] 00:28:10.565 }' 00:28:10.565 12:12:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:10.565 12:12:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.130 12:12:16 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:28:11.130 12:12:16 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:11.130 12:12:16 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:11.390 [2024-11-29 12:12:16.693875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:11.390 [2024-11-29 12:12:16.694276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.390 [2024-11-29 12:12:16.694398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:11.390 [2024-11-29 12:12:16.694641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.390 [2024-11-29 12:12:16.695177] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.390 [2024-11-29 12:12:16.695353] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:11.390 [2024-11-29 12:12:16.695618] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:11.390 [2024-11-29 12:12:16.695763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:11.390 pt2 00:28:11.390 12:12:16 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:11.390 12:12:16 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:11.390 12:12:16 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:11.652 [2024-11-29 12:12:16.973951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:11.652 [2024-11-29 12:12:16.974425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.652 [2024-11-29 12:12:16.974605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:11.652 [2024-11-29 12:12:16.974743] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.652 [2024-11-29 12:12:16.975425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.652 [2024-11-29 12:12:16.975610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:11.652 [2024-11-29 12:12:16.975880] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:11.652 [2024-11-29 12:12:16.976024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:11.652 [2024-11-29 12:12:16.976229] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:28:11.652 [2024-11-29 12:12:16.976343] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:11.652 [2024-11-29 12:12:16.976555] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:28:11.652 [2024-11-29 12:12:16.977563] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:28:11.652 [2024-11-29 12:12:16.977697] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:28:11.652 [2024-11-29 12:12:16.978067] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.652 pt3 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.652 12:12:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.911 12:12:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:11.911 "name": "raid_bdev1", 00:28:11.911 "uuid": "bbebf3d2-1386-4c27-997c-90617e312f74", 00:28:11.911 "strip_size_kb": 64, 00:28:11.911 "state": "online", 00:28:11.911 "raid_level": "raid5f", 00:28:11.911 "superblock": true, 00:28:11.911 "num_base_bdevs": 4, 00:28:11.911 "num_base_bdevs_discovered": 3, 00:28:11.911 "num_base_bdevs_operational": 3, 00:28:11.911 "base_bdevs_list": [ 00:28:11.911 { 00:28:11.911 "name": null, 00:28:11.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.911 "is_configured": false, 00:28:11.911 "data_offset": 2048, 00:28:11.911 "data_size": 63488 00:28:11.911 }, 00:28:11.911 { 00:28:11.911 "name": "pt2", 00:28:11.911 "uuid": "0746080b-d842-5710-9f20-c460594376a4", 00:28:11.911 "is_configured": true, 00:28:11.911 "data_offset": 2048, 00:28:11.911 "data_size": 63488 00:28:11.911 }, 00:28:11.911 { 00:28:11.911 "name": "pt3", 00:28:11.911 "uuid": "c9ba25aa-d1e6-5d56-b88f-320dd1d3c126", 00:28:11.911 "is_configured": true, 00:28:11.911 "data_offset": 2048, 00:28:11.911 "data_size": 63488 00:28:11.911 }, 00:28:11.911 { 00:28:11.911 "name": "pt4", 00:28:11.911 "uuid": "018e9582-b6b9-517a-9d73-85bd566ebeb6", 00:28:11.911 "is_configured": true, 00:28:11.911 "data_offset": 2048, 00:28:11.911 "data_size": 63488 00:28:11.911 } 00:28:11.911 ] 00:28:11.911 }' 00:28:11.911 12:12:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:11.911 12:12:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.477 12:12:17 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:12.477 12:12:17 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:28:12.735 [2024-11-29 12:12:18.110430] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:12.735 12:12:18 -- bdev/bdev_raid.sh@506 -- # '[' bbebf3d2-1386-4c27-997c-90617e312f74 '!=' bbebf3d2-1386-4c27-997c-90617e312f74 ']' 00:28:12.735 12:12:18 -- bdev/bdev_raid.sh@511 -- # killprocess 141811 00:28:12.735 12:12:18 -- common/autotest_common.sh@936 -- # '[' -z 141811 ']' 00:28:12.735 12:12:18 -- common/autotest_common.sh@940 -- # kill -0 141811 00:28:12.735 12:12:18 -- common/autotest_common.sh@941 -- # uname 00:28:12.735 12:12:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:12.735 12:12:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141811 00:28:12.735 12:12:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:12.735 12:12:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:12.735 12:12:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141811' 00:28:12.735 killing process with pid 141811 00:28:12.735 12:12:18 -- common/autotest_common.sh@955 -- # kill 141811 00:28:12.735 [2024-11-29 12:12:18.163950] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:12.735 12:12:18 -- common/autotest_common.sh@960 -- # wait 141811 00:28:12.735 [2024-11-29 12:12:18.164211] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:12.735 [2024-11-29 12:12:18.164395] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:12.735 [2024-11-29 12:12:18.164512] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:28:12.735 [2024-11-29 12:12:18.220246] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:12.994 12:12:18 -- bdev/bdev_raid.sh@513 -- # return 0 00:28:12.994 00:28:12.994 real 0m23.036s 00:28:12.994 user 0m43.292s 00:28:12.994 sys 0m2.740s 00:28:12.994 12:12:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:12.994 12:12:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.994 ************************************ 00:28:12.994 END TEST raid5f_superblock_test 00:28:12.994 ************************************ 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:28:13.252 12:12:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:28:13.252 12:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:13.252 12:12:18 -- common/autotest_common.sh@10 -- # set +x 00:28:13.252 ************************************ 00:28:13.252 START TEST raid5f_rebuild_test 00:28:13.252 ************************************ 00:28:13.252 12:12:18 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=142499 
00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:13.252 12:12:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142499 /var/tmp/spdk-raid.sock 00:28:13.252 12:12:18 -- common/autotest_common.sh@829 -- # '[' -z 142499 ']' 00:28:13.252 12:12:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:13.252 12:12:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.252 12:12:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:13.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:13.252 12:12:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.252 12:12:18 -- common/autotest_common.sh@10 -- # set +x 00:28:13.252 [2024-11-29 12:12:18.594930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:13.252 [2024-11-29 12:12:18.596040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142499 ] 00:28:13.252 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:13.252 Zero copy mechanism will not be used. 00:28:13.252 [2024-11-29 12:12:18.744314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.510 [2024-11-29 12:12:18.840226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.510 [2024-11-29 12:12:18.894560] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:14.075 12:12:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.075 12:12:19 -- common/autotest_common.sh@862 -- # return 0 00:28:14.075 12:12:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:14.075 12:12:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:14.075 12:12:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:14.332 BaseBdev1 00:28:14.332 12:12:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:14.332 12:12:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:14.332 12:12:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:14.590 BaseBdev2 00:28:14.590 12:12:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:14.590 12:12:20 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:14.590 12:12:20 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:14.847 BaseBdev3 00:28:14.847 12:12:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:14.847 12:12:20 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:14.847 12:12:20 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:15.106 BaseBdev4 00:28:15.106 12:12:20 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:15.366 spare_malloc 00:28:15.366 12:12:20 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:15.632 spare_delay 00:28:15.633 12:12:21 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:15.891 [2024-11-29 12:12:21.313021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:15.891 [2024-11-29 12:12:21.313422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.892 [2024-11-29 12:12:21.313607] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:15.892 [2024-11-29 12:12:21.313782] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.892 [2024-11-29 12:12:21.316808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.892 [2024-11-29 12:12:21.316993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:15.892 spare 00:28:15.892 12:12:21 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:16.150 [2024-11-29 12:12:21.549486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:16.150 [2024-11-29 12:12:21.552074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:16.150 [2024-11-29 12:12:21.552297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:16.150 [2024-11-29 12:12:21.552396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:16.150 [2024-11-29 12:12:21.552612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:28:16.150 [2024-11-29 12:12:21.552663] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:16.150 [2024-11-29 12:12:21.552976] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:28:16.150 [2024-11-29 12:12:21.553903] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:28:16.150 [2024-11-29 12:12:21.554036] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:28:16.150 [2024-11-29 12:12:21.554440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:16.150 12:12:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:16.151 12:12:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:16.151 12:12:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:16.151 12:12:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.151 12:12:21 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.409 12:12:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:16.409 "name": "raid_bdev1", 00:28:16.409 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:16.409 "strip_size_kb": 64, 00:28:16.409 "state": "online", 00:28:16.409 "raid_level": "raid5f", 00:28:16.409 "superblock": false, 00:28:16.409 "num_base_bdevs": 4, 00:28:16.409 "num_base_bdevs_discovered": 4, 00:28:16.409 "num_base_bdevs_operational": 4, 00:28:16.409 "base_bdevs_list": [ 00:28:16.409 { 00:28:16.409 "name": "BaseBdev1", 00:28:16.409 "uuid": "441a3d03-4a36-4567-a067-4e2a0853e5a2", 00:28:16.409 "is_configured": true, 00:28:16.409 "data_offset": 0, 00:28:16.409 "data_size": 65536 00:28:16.409 }, 00:28:16.409 { 00:28:16.409 "name": "BaseBdev2", 00:28:16.409 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:16.409 "is_configured": true, 00:28:16.409 "data_offset": 0, 00:28:16.409 "data_size": 65536 00:28:16.409 }, 00:28:16.409 { 00:28:16.409 "name": "BaseBdev3", 00:28:16.409 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:16.409 "is_configured": true, 00:28:16.409 "data_offset": 0, 00:28:16.409 "data_size": 65536 00:28:16.409 }, 00:28:16.409 { 00:28:16.409 "name": "BaseBdev4", 00:28:16.409 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:16.409 "is_configured": true, 00:28:16.409 "data_offset": 0, 00:28:16.409 "data_size": 65536 00:28:16.409 } 00:28:16.409 ] 00:28:16.409 }' 00:28:16.409 12:12:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:16.409 12:12:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.345 12:12:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:17.345 12:12:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:28:17.345 [2024-11-29 12:12:22.782940] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.345 12:12:22 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:28:17.345 12:12:22 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:17.345 12:12:22 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.604 12:12:23 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:28:17.604 12:12:23 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:28:17.604 12:12:23 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:28:17.604 12:12:23 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@12 -- # local i 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.604 12:12:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:17.862 [2024-11-29 12:12:23.318962] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:28:17.862 /dev/nbd0 00:28:17.862 12:12:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:28:17.862 12:12:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:17.862 12:12:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:17.862 12:12:23 -- common/autotest_common.sh@867 -- # local i 00:28:17.862 12:12:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:17.862 12:12:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:17.862 12:12:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:18.120 12:12:23 -- common/autotest_common.sh@871 -- # break 00:28:18.120 12:12:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:18.120 12:12:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:18.120 12:12:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:18.120 1+0 records in 00:28:18.120 1+0 records out 00:28:18.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567249 s, 7.2 MB/s 00:28:18.120 12:12:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:18.120 12:12:23 -- common/autotest_common.sh@884 -- # size=4096 00:28:18.120 12:12:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:18.120 12:12:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:18.120 12:12:23 -- common/autotest_common.sh@887 -- # return 0 00:28:18.120 12:12:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:18.120 12:12:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:18.120 12:12:23 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:28:18.120 12:12:23 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:28:18.120 12:12:23 -- bdev/bdev_raid.sh@582 -- # echo 192 00:28:18.120 12:12:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:28:18.687 512+0 records in 00:28:18.687 512+0 records out 00:28:18.687 100663296 bytes (101 MB, 96 MiB) copied, 0.553992 s, 182 MB/s 00:28:18.688 12:12:23 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:18.688 12:12:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:18.688 12:12:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:18.688 12:12:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:18.688 12:12:23 -- bdev/nbd_common.sh@51 -- # local i 00:28:18.688 12:12:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.688 12:12:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:18.947 [2024-11-29 12:12:24.248748] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@41 -- # break 00:28:18.947 12:12:24 -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.947 12:12:24 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:19.205 [2024-11-29 12:12:24.472363] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:19.205 
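The write sizing in this stretch falls straight out of the raid5f geometry: with 4 base bdevs and a 64 KiB strip, each stripe carries 3 data strips plus 1 parity strip, so a full-stripe write is 3 x 64 KiB = 192 KiB = 196608 bytes = 384 blocks of 512 bytes (hence write_unit_size=384), and count = 196608 / 384 = 512 stripes covers the whole 196608-block array, i.e. the 100663296 bytes (96 MiB) dd reports above. A sketch of the same full-stripe fill, assuming the array is already exported on /dev/nbd0 as in the trace:

    # Write whole raid5f stripes so every request is a full-stripe write
    strip_kb=64
    data_strips=3                               # 4 base bdevs, one strip per stripe holds parity
    bs=$(( strip_kb * 1024 * data_strips ))     # 196608 bytes per stripe
    dd if=/dev/urandom of=/dev/nbd0 bs=$bs count=512 oflag=direct

With the data in place, the last step of this stretch pulls BaseBdev1 out of the online array, which is what the state checks that follow are verifying.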
12:12:24 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.205 12:12:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.463 12:12:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:19.463 "name": "raid_bdev1", 00:28:19.463 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:19.463 "strip_size_kb": 64, 00:28:19.463 "state": "online", 00:28:19.463 "raid_level": "raid5f", 00:28:19.463 "superblock": false, 00:28:19.463 "num_base_bdevs": 4, 00:28:19.463 "num_base_bdevs_discovered": 3, 00:28:19.463 "num_base_bdevs_operational": 3, 00:28:19.463 "base_bdevs_list": [ 00:28:19.463 { 00:28:19.463 "name": null, 00:28:19.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.463 "is_configured": false, 00:28:19.463 "data_offset": 0, 00:28:19.463 "data_size": 65536 00:28:19.463 }, 00:28:19.463 { 00:28:19.464 "name": "BaseBdev2", 00:28:19.464 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:19.464 "is_configured": true, 00:28:19.464 "data_offset": 0, 00:28:19.464 "data_size": 65536 00:28:19.464 }, 00:28:19.464 { 00:28:19.464 "name": "BaseBdev3", 00:28:19.464 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:19.464 "is_configured": true, 00:28:19.464 "data_offset": 0, 00:28:19.464 "data_size": 65536 00:28:19.464 }, 00:28:19.464 { 00:28:19.464 "name": "BaseBdev4", 00:28:19.464 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:19.464 "is_configured": true, 00:28:19.464 "data_offset": 0, 00:28:19.464 "data_size": 65536 00:28:19.464 } 00:28:19.464 ] 00:28:19.464 }' 00:28:19.464 12:12:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:19.464 12:12:24 -- common/autotest_common.sh@10 -- # set +x 00:28:20.029 12:12:25 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:20.287 [2024-11-29 12:12:25.608653] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:20.287 [2024-11-29 12:12:25.608990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.287 [2024-11-29 12:12:25.613575] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:28:20.287 [2024-11-29 12:12:25.616616] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:20.288 12:12:25 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:28:21.223 12:12:26 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:21.223 12:12:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:21.223 12:12:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:21.223 
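The expectation being checked here is the degraded-array path: after BaseBdev1 is removed, raid_bdev1 must stay online with 3 of 4 members discovered (its slot in base_bdevs_list drops to a null name and an all-zero uuid), and attaching the bdev named spare immediately claims it and starts a rebuild. Condensed to the RPC calls visible in the trace, the sequence looks roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1        # array degrades but stays online
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 3
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare    # spare is claimed, rebuild starts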
12:12:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:21.223 12:12:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:21.223 12:12:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.223 12:12:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.480 12:12:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:21.480 "name": "raid_bdev1", 00:28:21.480 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:21.480 "strip_size_kb": 64, 00:28:21.480 "state": "online", 00:28:21.481 "raid_level": "raid5f", 00:28:21.481 "superblock": false, 00:28:21.481 "num_base_bdevs": 4, 00:28:21.481 "num_base_bdevs_discovered": 4, 00:28:21.481 "num_base_bdevs_operational": 4, 00:28:21.481 "process": { 00:28:21.481 "type": "rebuild", 00:28:21.481 "target": "spare", 00:28:21.481 "progress": { 00:28:21.481 "blocks": 23040, 00:28:21.481 "percent": 11 00:28:21.481 } 00:28:21.481 }, 00:28:21.481 "base_bdevs_list": [ 00:28:21.481 { 00:28:21.481 "name": "spare", 00:28:21.481 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:21.481 "is_configured": true, 00:28:21.481 "data_offset": 0, 00:28:21.481 "data_size": 65536 00:28:21.481 }, 00:28:21.481 { 00:28:21.481 "name": "BaseBdev2", 00:28:21.481 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:21.481 "is_configured": true, 00:28:21.481 "data_offset": 0, 00:28:21.481 "data_size": 65536 00:28:21.481 }, 00:28:21.481 { 00:28:21.481 "name": "BaseBdev3", 00:28:21.481 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:21.481 "is_configured": true, 00:28:21.481 "data_offset": 0, 00:28:21.481 "data_size": 65536 00:28:21.481 }, 00:28:21.481 { 00:28:21.481 "name": "BaseBdev4", 00:28:21.481 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:21.481 "is_configured": true, 00:28:21.481 "data_offset": 0, 00:28:21.481 "data_size": 65536 00:28:21.481 } 00:28:21.481 ] 00:28:21.481 }' 00:28:21.481 12:12:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:21.481 12:12:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:21.481 12:12:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:21.739 12:12:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:21.739 12:12:26 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:21.739 [2024-11-29 12:12:27.214380] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.739 [2024-11-29 12:12:27.231229] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:21.739 [2024-11-29 12:12:27.231608] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:22.058 "name": "raid_bdev1", 00:28:22.058 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:22.058 "strip_size_kb": 64, 00:28:22.058 "state": "online", 00:28:22.058 "raid_level": "raid5f", 00:28:22.058 "superblock": false, 00:28:22.058 "num_base_bdevs": 4, 00:28:22.058 "num_base_bdevs_discovered": 3, 00:28:22.058 "num_base_bdevs_operational": 3, 00:28:22.058 "base_bdevs_list": [ 00:28:22.058 { 00:28:22.058 "name": null, 00:28:22.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.058 "is_configured": false, 00:28:22.058 "data_offset": 0, 00:28:22.058 "data_size": 65536 00:28:22.058 }, 00:28:22.058 { 00:28:22.058 "name": "BaseBdev2", 00:28:22.058 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:22.058 "is_configured": true, 00:28:22.058 "data_offset": 0, 00:28:22.058 "data_size": 65536 00:28:22.058 }, 00:28:22.058 { 00:28:22.058 "name": "BaseBdev3", 00:28:22.058 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:22.058 "is_configured": true, 00:28:22.058 "data_offset": 0, 00:28:22.058 "data_size": 65536 00:28:22.058 }, 00:28:22.058 { 00:28:22.058 "name": "BaseBdev4", 00:28:22.058 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:22.058 "is_configured": true, 00:28:22.058 "data_offset": 0, 00:28:22.058 "data_size": 65536 00:28:22.058 } 00:28:22.058 ] 00:28:22.058 }' 00:28:22.058 12:12:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:22.058 12:12:27 -- common/autotest_common.sh@10 -- # set +x 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.646 12:12:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.905 12:12:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:22.905 "name": "raid_bdev1", 00:28:22.905 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:22.905 "strip_size_kb": 64, 00:28:22.905 "state": "online", 00:28:22.905 "raid_level": "raid5f", 00:28:22.905 "superblock": false, 00:28:22.905 "num_base_bdevs": 4, 00:28:22.905 "num_base_bdevs_discovered": 3, 00:28:22.905 "num_base_bdevs_operational": 3, 00:28:22.905 "base_bdevs_list": [ 00:28:22.905 { 00:28:22.905 "name": null, 00:28:22.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.905 "is_configured": false, 00:28:22.905 "data_offset": 0, 00:28:22.905 "data_size": 65536 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "BaseBdev2", 00:28:22.905 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:22.905 "is_configured": true, 00:28:22.905 "data_offset": 0, 00:28:22.905 "data_size": 65536 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "BaseBdev3", 00:28:22.905 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:22.905 "is_configured": true, 
00:28:22.905 "data_offset": 0, 00:28:22.905 "data_size": 65536 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "BaseBdev4", 00:28:22.905 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:22.905 "is_configured": true, 00:28:22.905 "data_offset": 0, 00:28:22.905 "data_size": 65536 00:28:22.905 } 00:28:22.905 ] 00:28:22.905 }' 00:28:22.905 12:12:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:23.163 12:12:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:23.163 12:12:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:23.163 12:12:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:23.163 12:12:28 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:23.422 [2024-11-29 12:12:28.740650] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:23.422 [2024-11-29 12:12:28.741009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:23.422 [2024-11-29 12:12:28.745451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:28:23.422 [2024-11-29 12:12:28.748271] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:23.422 12:12:28 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.357 12:12:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.615 12:12:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:24.615 "name": "raid_bdev1", 00:28:24.615 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:24.615 "strip_size_kb": 64, 00:28:24.615 "state": "online", 00:28:24.615 "raid_level": "raid5f", 00:28:24.615 "superblock": false, 00:28:24.615 "num_base_bdevs": 4, 00:28:24.615 "num_base_bdevs_discovered": 4, 00:28:24.615 "num_base_bdevs_operational": 4, 00:28:24.615 "process": { 00:28:24.615 "type": "rebuild", 00:28:24.615 "target": "spare", 00:28:24.615 "progress": { 00:28:24.615 "blocks": 23040, 00:28:24.615 "percent": 11 00:28:24.615 } 00:28:24.615 }, 00:28:24.615 "base_bdevs_list": [ 00:28:24.615 { 00:28:24.615 "name": "spare", 00:28:24.615 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:24.615 "is_configured": true, 00:28:24.615 "data_offset": 0, 00:28:24.615 "data_size": 65536 00:28:24.615 }, 00:28:24.615 { 00:28:24.615 "name": "BaseBdev2", 00:28:24.615 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:24.615 "is_configured": true, 00:28:24.615 "data_offset": 0, 00:28:24.615 "data_size": 65536 00:28:24.615 }, 00:28:24.615 { 00:28:24.615 "name": "BaseBdev3", 00:28:24.615 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:24.615 "is_configured": true, 00:28:24.615 "data_offset": 0, 00:28:24.615 "data_size": 65536 00:28:24.615 }, 00:28:24.615 { 00:28:24.615 "name": "BaseBdev4", 00:28:24.615 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:24.615 "is_configured": true, 00:28:24.615 "data_offset": 0, 
00:28:24.615 "data_size": 65536 00:28:24.615 } 00:28:24.615 ] 00:28:24.615 }' 00:28:24.615 12:12:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:24.615 12:12:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:24.615 12:12:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@657 -- # local timeout=724 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.873 12:12:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:24.873 "name": "raid_bdev1", 00:28:24.873 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:24.873 "strip_size_kb": 64, 00:28:24.873 "state": "online", 00:28:24.873 "raid_level": "raid5f", 00:28:24.873 "superblock": false, 00:28:24.873 "num_base_bdevs": 4, 00:28:24.873 "num_base_bdevs_discovered": 4, 00:28:24.873 "num_base_bdevs_operational": 4, 00:28:24.873 "process": { 00:28:24.873 "type": "rebuild", 00:28:24.873 "target": "spare", 00:28:24.873 "progress": { 00:28:24.873 "blocks": 30720, 00:28:24.873 "percent": 15 00:28:24.873 } 00:28:24.873 }, 00:28:24.873 "base_bdevs_list": [ 00:28:24.873 { 00:28:24.873 "name": "spare", 00:28:24.873 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:24.873 "is_configured": true, 00:28:24.873 "data_offset": 0, 00:28:24.873 "data_size": 65536 00:28:24.873 }, 00:28:24.873 { 00:28:24.873 "name": "BaseBdev2", 00:28:24.873 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:24.873 "is_configured": true, 00:28:24.873 "data_offset": 0, 00:28:24.873 "data_size": 65536 00:28:24.873 }, 00:28:24.873 { 00:28:24.873 "name": "BaseBdev3", 00:28:24.873 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:24.873 "is_configured": true, 00:28:24.873 "data_offset": 0, 00:28:24.873 "data_size": 65536 00:28:24.873 }, 00:28:24.873 { 00:28:24.873 "name": "BaseBdev4", 00:28:24.873 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:24.873 "is_configured": true, 00:28:24.873 "data_offset": 0, 00:28:24.873 "data_size": 65536 00:28:24.873 } 00:28:24.873 ] 00:28:24.873 }' 00:28:25.132 12:12:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:25.132 12:12:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.132 12:12:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:25.132 12:12:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.132 12:12:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.067 12:12:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.324 12:12:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:26.324 "name": "raid_bdev1", 00:28:26.324 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:26.324 "strip_size_kb": 64, 00:28:26.324 "state": "online", 00:28:26.324 "raid_level": "raid5f", 00:28:26.324 "superblock": false, 00:28:26.324 "num_base_bdevs": 4, 00:28:26.324 "num_base_bdevs_discovered": 4, 00:28:26.324 "num_base_bdevs_operational": 4, 00:28:26.324 "process": { 00:28:26.324 "type": "rebuild", 00:28:26.324 "target": "spare", 00:28:26.324 "progress": { 00:28:26.324 "blocks": 55680, 00:28:26.324 "percent": 28 00:28:26.324 } 00:28:26.324 }, 00:28:26.324 "base_bdevs_list": [ 00:28:26.324 { 00:28:26.324 "name": "spare", 00:28:26.325 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:26.325 "is_configured": true, 00:28:26.325 "data_offset": 0, 00:28:26.325 "data_size": 65536 00:28:26.325 }, 00:28:26.325 { 00:28:26.325 "name": "BaseBdev2", 00:28:26.325 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:26.325 "is_configured": true, 00:28:26.325 "data_offset": 0, 00:28:26.325 "data_size": 65536 00:28:26.325 }, 00:28:26.325 { 00:28:26.325 "name": "BaseBdev3", 00:28:26.325 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:26.325 "is_configured": true, 00:28:26.325 "data_offset": 0, 00:28:26.325 "data_size": 65536 00:28:26.325 }, 00:28:26.325 { 00:28:26.325 "name": "BaseBdev4", 00:28:26.325 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:26.325 "is_configured": true, 00:28:26.325 "data_offset": 0, 00:28:26.325 "data_size": 65536 00:28:26.325 } 00:28:26.325 ] 00:28:26.325 }' 00:28:26.325 12:12:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:26.325 12:12:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:26.325 12:12:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:26.325 12:12:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:26.325 12:12:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.706 12:12:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.706 12:12:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:27.706 "name": "raid_bdev1", 00:28:27.706 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:27.706 "strip_size_kb": 64, 00:28:27.706 "state": "online", 
00:28:27.706 "raid_level": "raid5f", 00:28:27.706 "superblock": false, 00:28:27.706 "num_base_bdevs": 4, 00:28:27.706 "num_base_bdevs_discovered": 4, 00:28:27.706 "num_base_bdevs_operational": 4, 00:28:27.706 "process": { 00:28:27.706 "type": "rebuild", 00:28:27.706 "target": "spare", 00:28:27.706 "progress": { 00:28:27.706 "blocks": 82560, 00:28:27.706 "percent": 41 00:28:27.706 } 00:28:27.706 }, 00:28:27.706 "base_bdevs_list": [ 00:28:27.706 { 00:28:27.706 "name": "spare", 00:28:27.706 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:27.706 "is_configured": true, 00:28:27.706 "data_offset": 0, 00:28:27.706 "data_size": 65536 00:28:27.706 }, 00:28:27.706 { 00:28:27.706 "name": "BaseBdev2", 00:28:27.706 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:27.706 "is_configured": true, 00:28:27.706 "data_offset": 0, 00:28:27.706 "data_size": 65536 00:28:27.706 }, 00:28:27.706 { 00:28:27.706 "name": "BaseBdev3", 00:28:27.706 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:27.706 "is_configured": true, 00:28:27.706 "data_offset": 0, 00:28:27.706 "data_size": 65536 00:28:27.706 }, 00:28:27.706 { 00:28:27.706 "name": "BaseBdev4", 00:28:27.706 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:27.706 "is_configured": true, 00:28:27.706 "data_offset": 0, 00:28:27.706 "data_size": 65536 00:28:27.706 } 00:28:27.706 ] 00:28:27.706 }' 00:28:27.706 12:12:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:27.706 12:12:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.706 12:12:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:27.706 12:12:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.706 12:12:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:29.087 "name": "raid_bdev1", 00:28:29.087 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:29.087 "strip_size_kb": 64, 00:28:29.087 "state": "online", 00:28:29.087 "raid_level": "raid5f", 00:28:29.087 "superblock": false, 00:28:29.087 "num_base_bdevs": 4, 00:28:29.087 "num_base_bdevs_discovered": 4, 00:28:29.087 "num_base_bdevs_operational": 4, 00:28:29.087 "process": { 00:28:29.087 "type": "rebuild", 00:28:29.087 "target": "spare", 00:28:29.087 "progress": { 00:28:29.087 "blocks": 107520, 00:28:29.087 "percent": 54 00:28:29.087 } 00:28:29.087 }, 00:28:29.087 "base_bdevs_list": [ 00:28:29.087 { 00:28:29.087 "name": "spare", 00:28:29.087 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:29.087 "is_configured": true, 00:28:29.087 "data_offset": 0, 00:28:29.087 "data_size": 65536 00:28:29.087 }, 00:28:29.087 { 00:28:29.087 "name": "BaseBdev2", 00:28:29.087 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:29.087 "is_configured": true, 00:28:29.087 "data_offset": 0, 
00:28:29.087 "data_size": 65536 00:28:29.087 }, 00:28:29.087 { 00:28:29.087 "name": "BaseBdev3", 00:28:29.087 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:29.087 "is_configured": true, 00:28:29.087 "data_offset": 0, 00:28:29.087 "data_size": 65536 00:28:29.087 }, 00:28:29.087 { 00:28:29.087 "name": "BaseBdev4", 00:28:29.087 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:29.087 "is_configured": true, 00:28:29.087 "data_offset": 0, 00:28:29.087 "data_size": 65536 00:28:29.087 } 00:28:29.087 ] 00:28:29.087 }' 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:29.087 12:12:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:30.464 "name": "raid_bdev1", 00:28:30.464 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:30.464 "strip_size_kb": 64, 00:28:30.464 "state": "online", 00:28:30.464 "raid_level": "raid5f", 00:28:30.464 "superblock": false, 00:28:30.464 "num_base_bdevs": 4, 00:28:30.464 "num_base_bdevs_discovered": 4, 00:28:30.464 "num_base_bdevs_operational": 4, 00:28:30.464 "process": { 00:28:30.464 "type": "rebuild", 00:28:30.464 "target": "spare", 00:28:30.464 "progress": { 00:28:30.464 "blocks": 134400, 00:28:30.464 "percent": 68 00:28:30.464 } 00:28:30.464 }, 00:28:30.464 "base_bdevs_list": [ 00:28:30.464 { 00:28:30.464 "name": "spare", 00:28:30.464 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:30.464 "is_configured": true, 00:28:30.464 "data_offset": 0, 00:28:30.464 "data_size": 65536 00:28:30.464 }, 00:28:30.464 { 00:28:30.464 "name": "BaseBdev2", 00:28:30.464 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:30.464 "is_configured": true, 00:28:30.464 "data_offset": 0, 00:28:30.464 "data_size": 65536 00:28:30.464 }, 00:28:30.464 { 00:28:30.464 "name": "BaseBdev3", 00:28:30.464 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:30.464 "is_configured": true, 00:28:30.464 "data_offset": 0, 00:28:30.464 "data_size": 65536 00:28:30.464 }, 00:28:30.464 { 00:28:30.464 "name": "BaseBdev4", 00:28:30.464 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:30.464 "is_configured": true, 00:28:30.464 "data_offset": 0, 00:28:30.464 "data_size": 65536 00:28:30.464 } 00:28:30.464 ] 00:28:30.464 }' 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:28:30.464 12:12:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.837 12:12:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.837 12:12:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:31.837 "name": "raid_bdev1", 00:28:31.837 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:31.837 "strip_size_kb": 64, 00:28:31.837 "state": "online", 00:28:31.837 "raid_level": "raid5f", 00:28:31.837 "superblock": false, 00:28:31.837 "num_base_bdevs": 4, 00:28:31.837 "num_base_bdevs_discovered": 4, 00:28:31.837 "num_base_bdevs_operational": 4, 00:28:31.837 "process": { 00:28:31.837 "type": "rebuild", 00:28:31.837 "target": "spare", 00:28:31.837 "progress": { 00:28:31.837 "blocks": 159360, 00:28:31.837 "percent": 81 00:28:31.837 } 00:28:31.837 }, 00:28:31.837 "base_bdevs_list": [ 00:28:31.837 { 00:28:31.837 "name": "spare", 00:28:31.837 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:31.837 "is_configured": true, 00:28:31.837 "data_offset": 0, 00:28:31.837 "data_size": 65536 00:28:31.837 }, 00:28:31.837 { 00:28:31.837 "name": "BaseBdev2", 00:28:31.837 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:31.837 "is_configured": true, 00:28:31.837 "data_offset": 0, 00:28:31.837 "data_size": 65536 00:28:31.837 }, 00:28:31.837 { 00:28:31.837 "name": "BaseBdev3", 00:28:31.837 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:31.837 "is_configured": true, 00:28:31.837 "data_offset": 0, 00:28:31.837 "data_size": 65536 00:28:31.837 }, 00:28:31.837 { 00:28:31.837 "name": "BaseBdev4", 00:28:31.837 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:31.837 "is_configured": true, 00:28:31.837 "data_offset": 0, 00:28:31.837 "data_size": 65536 00:28:31.837 } 00:28:31.837 ] 00:28:31.837 }' 00:28:31.837 12:12:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:31.837 12:12:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:31.837 12:12:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:31.837 12:12:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:31.837 12:12:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.212 12:12:38 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:33.212 "name": "raid_bdev1", 00:28:33.212 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:33.212 "strip_size_kb": 64, 00:28:33.212 "state": "online", 00:28:33.212 "raid_level": "raid5f", 00:28:33.212 "superblock": false, 00:28:33.212 "num_base_bdevs": 4, 00:28:33.212 "num_base_bdevs_discovered": 4, 00:28:33.212 "num_base_bdevs_operational": 4, 00:28:33.212 "process": { 00:28:33.212 "type": "rebuild", 00:28:33.212 "target": "spare", 00:28:33.212 "progress": { 00:28:33.212 "blocks": 186240, 00:28:33.212 "percent": 94 00:28:33.212 } 00:28:33.212 }, 00:28:33.212 "base_bdevs_list": [ 00:28:33.212 { 00:28:33.212 "name": "spare", 00:28:33.212 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:33.212 "is_configured": true, 00:28:33.212 "data_offset": 0, 00:28:33.212 "data_size": 65536 00:28:33.212 }, 00:28:33.212 { 00:28:33.212 "name": "BaseBdev2", 00:28:33.212 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:33.212 "is_configured": true, 00:28:33.212 "data_offset": 0, 00:28:33.212 "data_size": 65536 00:28:33.212 }, 00:28:33.212 { 00:28:33.212 "name": "BaseBdev3", 00:28:33.212 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:33.212 "is_configured": true, 00:28:33.212 "data_offset": 0, 00:28:33.212 "data_size": 65536 00:28:33.212 }, 00:28:33.212 { 00:28:33.212 "name": "BaseBdev4", 00:28:33.212 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:33.212 "is_configured": true, 00:28:33.212 "data_offset": 0, 00:28:33.212 "data_size": 65536 00:28:33.212 } 00:28:33.212 ] 00:28:33.212 }' 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:33.212 12:12:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:33.779 [2024-11-29 12:12:39.138519] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:33.779 [2024-11-29 12:12:39.138936] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:33.779 [2024-11-29 12:12:39.139157] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.345 12:12:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.602 12:12:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:34.602 "name": "raid_bdev1", 00:28:34.602 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:34.602 "strip_size_kb": 64, 00:28:34.602 "state": "online", 00:28:34.602 "raid_level": "raid5f", 00:28:34.602 "superblock": false, 00:28:34.602 "num_base_bdevs": 4, 00:28:34.602 "num_base_bdevs_discovered": 4, 00:28:34.602 "num_base_bdevs_operational": 4, 00:28:34.602 "base_bdevs_list": [ 00:28:34.602 { 
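The rebuild has just completed ("Finished rebuild on raid bdev raid_bdev1" a few lines up), so from the next poll onward the RPC output no longer carries a process object: the '.process.type // "none"' and '.process.target // "none"' filters both fall back to "none", the polling loop breaks, and the remainder of this test re-verifies the array as fully online with 4 of 4 members, deletes it, and compares the rebuilt spare against the original BaseBdev1. A condensed sketch of those closing steps, reusing the socket and device names from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Rebuild done: no process object, spare counted as a regular member again
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 4
    # Tear the array down and confirm nothing is left
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq length                            # expect 0
    # Byte-for-byte check: passes only if the rebuild reproduced BaseBdev1's data on spare
    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1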
00:28:34.602 "name": "spare", 00:28:34.602 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:34.602 "is_configured": true, 00:28:34.602 "data_offset": 0, 00:28:34.602 "data_size": 65536 00:28:34.602 }, 00:28:34.602 { 00:28:34.602 "name": "BaseBdev2", 00:28:34.602 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:34.603 "is_configured": true, 00:28:34.603 "data_offset": 0, 00:28:34.603 "data_size": 65536 00:28:34.603 }, 00:28:34.603 { 00:28:34.603 "name": "BaseBdev3", 00:28:34.603 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:34.603 "is_configured": true, 00:28:34.603 "data_offset": 0, 00:28:34.603 "data_size": 65536 00:28:34.603 }, 00:28:34.603 { 00:28:34.603 "name": "BaseBdev4", 00:28:34.603 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:34.603 "is_configured": true, 00:28:34.603 "data_offset": 0, 00:28:34.603 "data_size": 65536 00:28:34.603 } 00:28:34.603 ] 00:28:34.603 }' 00:28:34.603 12:12:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@660 -- # break 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.603 12:12:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.861 12:12:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:34.861 "name": "raid_bdev1", 00:28:34.861 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:34.861 "strip_size_kb": 64, 00:28:34.861 "state": "online", 00:28:34.861 "raid_level": "raid5f", 00:28:34.861 "superblock": false, 00:28:34.861 "num_base_bdevs": 4, 00:28:34.861 "num_base_bdevs_discovered": 4, 00:28:34.861 "num_base_bdevs_operational": 4, 00:28:34.861 "base_bdevs_list": [ 00:28:34.861 { 00:28:34.861 "name": "spare", 00:28:34.861 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:34.861 "is_configured": true, 00:28:34.861 "data_offset": 0, 00:28:34.861 "data_size": 65536 00:28:34.861 }, 00:28:34.861 { 00:28:34.861 "name": "BaseBdev2", 00:28:34.861 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:34.861 "is_configured": true, 00:28:34.861 "data_offset": 0, 00:28:34.861 "data_size": 65536 00:28:34.861 }, 00:28:34.861 { 00:28:34.861 "name": "BaseBdev3", 00:28:34.861 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:34.861 "is_configured": true, 00:28:34.861 "data_offset": 0, 00:28:34.861 "data_size": 65536 00:28:34.861 }, 00:28:34.861 { 00:28:34.861 "name": "BaseBdev4", 00:28:34.861 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:34.861 "is_configured": true, 00:28:34.861 "data_offset": 0, 00:28:34.861 "data_size": 65536 00:28:34.861 } 00:28:34.861 ] 00:28:34.861 }' 00:28:34.861 12:12:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:34.861 12:12:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:34.861 12:12:40 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:28:35.117 12:12:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.118 12:12:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.376 12:12:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:35.376 "name": "raid_bdev1", 00:28:35.376 "uuid": "de95c6fe-92f6-4d6c-8777-294e8a817741", 00:28:35.376 "strip_size_kb": 64, 00:28:35.376 "state": "online", 00:28:35.376 "raid_level": "raid5f", 00:28:35.376 "superblock": false, 00:28:35.376 "num_base_bdevs": 4, 00:28:35.376 "num_base_bdevs_discovered": 4, 00:28:35.376 "num_base_bdevs_operational": 4, 00:28:35.376 "base_bdevs_list": [ 00:28:35.376 { 00:28:35.376 "name": "spare", 00:28:35.376 "uuid": "e150c279-326a-5bcf-be99-c3d04beb7752", 00:28:35.376 "is_configured": true, 00:28:35.376 "data_offset": 0, 00:28:35.376 "data_size": 65536 00:28:35.376 }, 00:28:35.376 { 00:28:35.376 "name": "BaseBdev2", 00:28:35.376 "uuid": "50f9a7da-f5a7-47b7-a428-2d1f85bfc048", 00:28:35.376 "is_configured": true, 00:28:35.376 "data_offset": 0, 00:28:35.376 "data_size": 65536 00:28:35.376 }, 00:28:35.376 { 00:28:35.376 "name": "BaseBdev3", 00:28:35.376 "uuid": "df62e064-df26-4a44-802b-439f5b780383", 00:28:35.376 "is_configured": true, 00:28:35.376 "data_offset": 0, 00:28:35.376 "data_size": 65536 00:28:35.376 }, 00:28:35.376 { 00:28:35.376 "name": "BaseBdev4", 00:28:35.376 "uuid": "1de3474c-5cf2-4b58-8f12-fe0c515181d6", 00:28:35.376 "is_configured": true, 00:28:35.376 "data_offset": 0, 00:28:35.376 "data_size": 65536 00:28:35.376 } 00:28:35.376 ] 00:28:35.376 }' 00:28:35.376 12:12:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:35.376 12:12:40 -- common/autotest_common.sh@10 -- # set +x 00:28:35.945 12:12:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:36.204 [2024-11-29 12:12:41.476739] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:36.204 [2024-11-29 12:12:41.477055] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:36.204 [2024-11-29 12:12:41.477302] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:36.204 [2024-11-29 12:12:41.477528] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:36.204 [2024-11-29 12:12:41.477655] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:28:36.204 12:12:41 -- bdev/bdev_raid.sh@671 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.204 12:12:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:28:36.462 12:12:41 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:28:36.462 12:12:41 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:28:36.462 12:12:41 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@12 -- # local i 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.462 12:12:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:36.720 /dev/nbd0 00:28:36.720 12:12:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:36.720 12:12:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:36.720 12:12:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:36.720 12:12:42 -- common/autotest_common.sh@867 -- # local i 00:28:36.720 12:12:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:36.720 12:12:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:36.720 12:12:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:36.720 12:12:42 -- common/autotest_common.sh@871 -- # break 00:28:36.720 12:12:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:36.720 12:12:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:36.720 12:12:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.720 1+0 records in 00:28:36.720 1+0 records out 00:28:36.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499269 s, 8.2 MB/s 00:28:36.720 12:12:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.720 12:12:42 -- common/autotest_common.sh@884 -- # size=4096 00:28:36.720 12:12:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.720 12:12:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:36.720 12:12:42 -- common/autotest_common.sh@887 -- # return 0 00:28:36.720 12:12:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.720 12:12:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.720 12:12:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:36.978 /dev/nbd1 00:28:36.978 12:12:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:36.978 12:12:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:36.978 12:12:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:36.978 12:12:42 -- common/autotest_common.sh@867 -- # local i 00:28:36.978 12:12:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:36.978 12:12:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:36.978 12:12:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:36.978 12:12:42 -- common/autotest_common.sh@871 -- 
# break 00:28:36.978 12:12:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:36.978 12:12:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:36.978 12:12:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.978 1+0 records in 00:28:36.978 1+0 records out 00:28:36.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647582 s, 6.3 MB/s 00:28:36.978 12:12:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.978 12:12:42 -- common/autotest_common.sh@884 -- # size=4096 00:28:36.978 12:12:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.978 12:12:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:36.978 12:12:42 -- common/autotest_common.sh@887 -- # return 0 00:28:36.978 12:12:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.978 12:12:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.978 12:12:42 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:37.236 12:12:42 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@51 -- # local i 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:37.236 12:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@41 -- # break 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@41 -- # break 00:28:37.494 12:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.494 12:12:42 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:28:37.494 12:12:42 -- bdev/bdev_raid.sh@709 -- # killprocess 142499 00:28:37.494 12:12:42 -- common/autotest_common.sh@936 -- # '[' -z 142499 ']' 00:28:37.494 12:12:42 -- common/autotest_common.sh@940 -- # kill -0 142499 00:28:37.494 12:12:42 -- common/autotest_common.sh@941 -- # uname 00:28:37.495 12:12:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:37.753 12:12:43 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142499 00:28:37.753 12:12:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:37.753 12:12:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:37.753 12:12:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142499' 00:28:37.753 killing process with pid 142499 00:28:37.753 12:12:43 -- common/autotest_common.sh@955 -- # kill 142499 00:28:37.753 Received shutdown signal, test time was about 60.000000 seconds 00:28:37.753 00:28:37.753 Latency(us) 00:28:37.753 [2024-11-29T12:12:43.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.753 [2024-11-29T12:12:43.264Z] =================================================================================================================== 00:28:37.753 [2024-11-29T12:12:43.264Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:37.753 12:12:43 -- common/autotest_common.sh@960 -- # wait 142499 00:28:37.753 [2024-11-29 12:12:43.027555] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:37.753 [2024-11-29 12:12:43.092089] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@711 -- # return 0 00:28:38.011 00:28:38.011 real 0m24.819s 00:28:38.011 user 0m37.020s 00:28:38.011 sys 0m2.968s 00:28:38.011 12:12:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:38.011 12:12:43 -- common/autotest_common.sh@10 -- # set +x 00:28:38.011 ************************************ 00:28:38.011 END TEST raid5f_rebuild_test 00:28:38.011 ************************************ 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:28:38.011 12:12:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:28:38.011 12:12:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:38.011 12:12:43 -- common/autotest_common.sh@10 -- # set +x 00:28:38.011 ************************************ 00:28:38.011 START TEST raid5f_rebuild_test_sb 00:28:38.011 ************************************ 00:28:38.011 12:12:43 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:38.011 12:12:43 -- 
bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@544 -- # raid_pid=143101 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:38.011 12:12:43 -- bdev/bdev_raid.sh@545 -- # waitforlisten 143101 /var/tmp/spdk-raid.sock 00:28:38.011 12:12:43 -- common/autotest_common.sh@829 -- # '[' -z 143101 ']' 00:28:38.011 12:12:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:38.011 12:12:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.011 12:12:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:38.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:38.012 12:12:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.012 12:12:43 -- common/autotest_common.sh@10 -- # set +x 00:28:38.012 [2024-11-29 12:12:43.482797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:38.012 [2024-11-29 12:12:43.483308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143101 ] 00:28:38.012 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:38.012 Zero copy mechanism will not be used. 
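The superblock variant of the rebuild test differs from the run above mainly in how the array will be created: besides ' -z 64' for the strip size, create_arg picks up ' -s', which the test presumably passes through to the raid creation RPC later on so that an on-disk superblock is written to the base bdevs (shifting data_offset away from 0). A fresh bdevperf instance is started as the RPC target for this test with exactly the flags shown in the trace, and the base bdevs created next are small malloc disks wrapped in passthru bdevs. A rough sketch of that setup, reusing the paths and names from the trace:

    # Start bdevperf on its own RPC socket and wait for it to come up
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh

    # Each base bdev: a 32 MB malloc disk with 512-byte blocks, behind a passthru bdev
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1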
00:28:38.269 [2024-11-29 12:12:43.627146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.269 [2024-11-29 12:12:43.730043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.528 [2024-11-29 12:12:43.784258] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:39.094 12:12:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.094 12:12:44 -- common/autotest_common.sh@862 -- # return 0 00:28:39.094 12:12:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:39.094 12:12:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:39.094 12:12:44 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:39.352 BaseBdev1_malloc 00:28:39.352 12:12:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:39.610 [2024-11-29 12:12:44.991445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:39.610 [2024-11-29 12:12:44.991891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:39.610 [2024-11-29 12:12:44.992093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:28:39.610 [2024-11-29 12:12:44.992255] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:39.610 [2024-11-29 12:12:44.995132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:39.610 [2024-11-29 12:12:44.995328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:39.610 BaseBdev1 00:28:39.610 12:12:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:39.610 12:12:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:39.611 12:12:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:39.867 BaseBdev2_malloc 00:28:39.867 12:12:45 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:40.125 [2024-11-29 12:12:45.482988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:40.125 [2024-11-29 12:12:45.483391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.125 [2024-11-29 12:12:45.483481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:40.125 [2024-11-29 12:12:45.483652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.125 [2024-11-29 12:12:45.486257] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.125 [2024-11-29 12:12:45.486463] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:40.125 BaseBdev2 00:28:40.125 12:12:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:40.125 12:12:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:40.125 12:12:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:40.384 BaseBdev3_malloc 00:28:40.384 12:12:45 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:28:40.642 [2024-11-29 12:12:46.049747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:40.642 [2024-11-29 12:12:46.050033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.642 [2024-11-29 12:12:46.050211] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:40.642 [2024-11-29 12:12:46.050390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.642 [2024-11-29 12:12:46.053098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.642 [2024-11-29 12:12:46.053277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:40.642 BaseBdev3 00:28:40.642 12:12:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:40.642 12:12:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:40.642 12:12:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:40.900 BaseBdev4_malloc 00:28:40.900 12:12:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:41.158 [2024-11-29 12:12:46.533297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:41.158 [2024-11-29 12:12:46.533693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.158 [2024-11-29 12:12:46.533779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:41.158 [2024-11-29 12:12:46.534051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.158 [2024-11-29 12:12:46.536727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.158 [2024-11-29 12:12:46.536929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:41.158 BaseBdev4 00:28:41.158 12:12:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:41.417 spare_malloc 00:28:41.417 12:12:46 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:41.675 spare_delay 00:28:41.675 12:12:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:41.933 [2024-11-29 12:12:47.316657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:41.934 [2024-11-29 12:12:47.317045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.934 [2024-11-29 12:12:47.317206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:41.934 [2024-11-29 12:12:47.317355] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.934 [2024-11-29 12:12:47.320149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.934 [2024-11-29 12:12:47.320324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:41.934 spare 00:28:41.934 12:12:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:42.192 [2024-11-29 12:12:47.588834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:42.192 [2024-11-29 12:12:47.591437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:42.192 [2024-11-29 12:12:47.591673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:42.192 [2024-11-29 12:12:47.591780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:42.192 [2024-11-29 12:12:47.592110] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:42.192 [2024-11-29 12:12:47.592237] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:42.192 [2024-11-29 12:12:47.592446] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:42.192 [2024-11-29 12:12:47.593352] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:42.192 [2024-11-29 12:12:47.593477] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:42.192 [2024-11-29 12:12:47.593855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.192 12:12:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.450 12:12:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:42.450 "name": "raid_bdev1", 00:28:42.450 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:42.450 "strip_size_kb": 64, 00:28:42.450 "state": "online", 00:28:42.450 "raid_level": "raid5f", 00:28:42.450 "superblock": true, 00:28:42.450 "num_base_bdevs": 4, 00:28:42.450 "num_base_bdevs_discovered": 4, 00:28:42.450 "num_base_bdevs_operational": 4, 00:28:42.450 "base_bdevs_list": [ 00:28:42.450 { 00:28:42.450 "name": "BaseBdev1", 00:28:42.450 "uuid": "d7076d66-33e8-553a-8f95-097cb631e7fe", 00:28:42.450 "is_configured": true, 00:28:42.450 "data_offset": 2048, 00:28:42.450 "data_size": 63488 00:28:42.450 }, 00:28:42.450 { 00:28:42.450 "name": "BaseBdev2", 00:28:42.450 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:42.450 "is_configured": true, 00:28:42.450 "data_offset": 2048, 00:28:42.450 "data_size": 63488 00:28:42.450 }, 00:28:42.450 { 00:28:42.450 "name": "BaseBdev3", 00:28:42.450 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:42.450 "is_configured": true, 00:28:42.450 "data_offset": 2048, 00:28:42.450 "data_size": 63488 00:28:42.450 
}, 00:28:42.450 { 00:28:42.450 "name": "BaseBdev4", 00:28:42.451 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:42.451 "is_configured": true, 00:28:42.451 "data_offset": 2048, 00:28:42.451 "data_size": 63488 00:28:42.451 } 00:28:42.451 ] 00:28:42.451 }' 00:28:42.451 12:12:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:42.451 12:12:47 -- common/autotest_common.sh@10 -- # set +x 00:28:43.017 12:12:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:43.017 12:12:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:28:43.275 [2024-11-29 12:12:48.714220] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:43.275 12:12:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:28:43.275 12:12:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.275 12:12:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:43.534 12:12:48 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:28:43.534 12:12:48 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:28:43.534 12:12:48 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:28:43.534 12:12:48 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@12 -- # local i 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.534 12:12:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:43.792 [2024-11-29 12:12:49.222215] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:28:43.792 /dev/nbd0 00:28:43.792 12:12:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:43.792 12:12:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:43.792 12:12:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:43.792 12:12:49 -- common/autotest_common.sh@867 -- # local i 00:28:43.792 12:12:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:43.792 12:12:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:43.792 12:12:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:43.792 12:12:49 -- common/autotest_common.sh@871 -- # break 00:28:43.792 12:12:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:43.792 12:12:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:43.792 12:12:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:43.792 1+0 records in 00:28:43.792 1+0 records out 00:28:43.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671821 s, 6.1 MB/s 00:28:43.792 12:12:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.792 12:12:49 -- common/autotest_common.sh@884 -- # size=4096 00:28:43.792 12:12:49 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.792 12:12:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:43.792 12:12:49 -- common/autotest_common.sh@887 -- # return 0 00:28:43.792 12:12:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:43.792 12:12:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.792 12:12:49 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:28:43.792 12:12:49 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:28:43.792 12:12:49 -- bdev/bdev_raid.sh@582 -- # echo 192 00:28:43.792 12:12:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:28:44.728 496+0 records in 00:28:44.728 496+0 records out 00:28:44.728 97517568 bytes (98 MB, 93 MiB) copied, 0.600146 s, 162 MB/s 00:28:44.728 12:12:49 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:44.728 12:12:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.728 12:12:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:44.728 12:12:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.728 12:12:49 -- bdev/nbd_common.sh@51 -- # local i 00:28:44.728 12:12:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.728 12:12:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:44.728 [2024-11-29 12:12:50.190149] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@41 -- # break 00:28:44.728 12:12:50 -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.728 12:12:50 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:44.986 [2024-11-29 12:12:50.461831] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.986 12:12:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.553 12:12:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:45.553 "name": "raid_bdev1", 00:28:45.553 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:45.553 
"strip_size_kb": 64, 00:28:45.553 "state": "online", 00:28:45.553 "raid_level": "raid5f", 00:28:45.553 "superblock": true, 00:28:45.553 "num_base_bdevs": 4, 00:28:45.553 "num_base_bdevs_discovered": 3, 00:28:45.553 "num_base_bdevs_operational": 3, 00:28:45.553 "base_bdevs_list": [ 00:28:45.553 { 00:28:45.553 "name": null, 00:28:45.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.553 "is_configured": false, 00:28:45.553 "data_offset": 2048, 00:28:45.553 "data_size": 63488 00:28:45.553 }, 00:28:45.553 { 00:28:45.553 "name": "BaseBdev2", 00:28:45.553 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:45.553 "is_configured": true, 00:28:45.553 "data_offset": 2048, 00:28:45.553 "data_size": 63488 00:28:45.553 }, 00:28:45.553 { 00:28:45.553 "name": "BaseBdev3", 00:28:45.553 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:45.553 "is_configured": true, 00:28:45.553 "data_offset": 2048, 00:28:45.553 "data_size": 63488 00:28:45.553 }, 00:28:45.553 { 00:28:45.553 "name": "BaseBdev4", 00:28:45.553 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:45.553 "is_configured": true, 00:28:45.553 "data_offset": 2048, 00:28:45.553 "data_size": 63488 00:28:45.553 } 00:28:45.553 ] 00:28:45.553 }' 00:28:45.553 12:12:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:45.553 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:46.119 12:12:51 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:46.119 [2024-11-29 12:12:51.626105] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:46.119 [2024-11-29 12:12:51.626519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:46.119 [2024-11-29 12:12:51.631069] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:28:46.119 [2024-11-29 12:12:51.634046] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:46.390 12:12:51 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.355 12:12:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.614 12:12:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:47.614 "name": "raid_bdev1", 00:28:47.614 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:47.614 "strip_size_kb": 64, 00:28:47.614 "state": "online", 00:28:47.614 "raid_level": "raid5f", 00:28:47.614 "superblock": true, 00:28:47.614 "num_base_bdevs": 4, 00:28:47.614 "num_base_bdevs_discovered": 4, 00:28:47.614 "num_base_bdevs_operational": 4, 00:28:47.614 "process": { 00:28:47.614 "type": "rebuild", 00:28:47.614 "target": "spare", 00:28:47.614 "progress": { 00:28:47.614 "blocks": 23040, 00:28:47.614 "percent": 12 00:28:47.614 } 00:28:47.614 }, 00:28:47.614 "base_bdevs_list": [ 00:28:47.614 { 00:28:47.614 "name": "spare", 00:28:47.614 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:47.614 "is_configured": true, 
00:28:47.614 "data_offset": 2048, 00:28:47.614 "data_size": 63488 00:28:47.614 }, 00:28:47.614 { 00:28:47.614 "name": "BaseBdev2", 00:28:47.614 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:47.614 "is_configured": true, 00:28:47.614 "data_offset": 2048, 00:28:47.614 "data_size": 63488 00:28:47.614 }, 00:28:47.614 { 00:28:47.614 "name": "BaseBdev3", 00:28:47.614 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:47.614 "is_configured": true, 00:28:47.614 "data_offset": 2048, 00:28:47.614 "data_size": 63488 00:28:47.614 }, 00:28:47.614 { 00:28:47.614 "name": "BaseBdev4", 00:28:47.614 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:47.614 "is_configured": true, 00:28:47.614 "data_offset": 2048, 00:28:47.614 "data_size": 63488 00:28:47.614 } 00:28:47.614 ] 00:28:47.614 }' 00:28:47.614 12:12:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:47.614 12:12:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:47.614 12:12:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:47.614 12:12:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:47.614 12:12:53 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:47.872 [2024-11-29 12:12:53.244619] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:47.872 [2024-11-29 12:12:53.250561] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:47.872 [2024-11-29 12:12:53.250850] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.872 12:12:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.130 12:12:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:48.130 "name": "raid_bdev1", 00:28:48.130 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:48.130 "strip_size_kb": 64, 00:28:48.130 "state": "online", 00:28:48.130 "raid_level": "raid5f", 00:28:48.130 "superblock": true, 00:28:48.130 "num_base_bdevs": 4, 00:28:48.130 "num_base_bdevs_discovered": 3, 00:28:48.130 "num_base_bdevs_operational": 3, 00:28:48.130 "base_bdevs_list": [ 00:28:48.130 { 00:28:48.130 "name": null, 00:28:48.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.130 "is_configured": false, 00:28:48.130 "data_offset": 2048, 00:28:48.130 "data_size": 63488 00:28:48.130 }, 00:28:48.130 { 00:28:48.130 "name": "BaseBdev2", 00:28:48.130 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:48.130 "is_configured": true, 00:28:48.130 "data_offset": 
2048, 00:28:48.130 "data_size": 63488 00:28:48.130 }, 00:28:48.130 { 00:28:48.130 "name": "BaseBdev3", 00:28:48.130 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:48.130 "is_configured": true, 00:28:48.130 "data_offset": 2048, 00:28:48.130 "data_size": 63488 00:28:48.130 }, 00:28:48.130 { 00:28:48.130 "name": "BaseBdev4", 00:28:48.130 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:48.130 "is_configured": true, 00:28:48.130 "data_offset": 2048, 00:28:48.130 "data_size": 63488 00:28:48.131 } 00:28:48.131 ] 00:28:48.131 }' 00:28:48.131 12:12:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:48.131 12:12:53 -- common/autotest_common.sh@10 -- # set +x 00:28:48.696 12:12:54 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:48.696 12:12:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:48.697 12:12:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:48.697 12:12:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:48.697 12:12:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:48.697 12:12:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.697 12:12:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.955 12:12:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:48.955 "name": "raid_bdev1", 00:28:48.955 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:48.955 "strip_size_kb": 64, 00:28:48.955 "state": "online", 00:28:48.955 "raid_level": "raid5f", 00:28:48.955 "superblock": true, 00:28:48.955 "num_base_bdevs": 4, 00:28:48.955 "num_base_bdevs_discovered": 3, 00:28:48.955 "num_base_bdevs_operational": 3, 00:28:48.955 "base_bdevs_list": [ 00:28:48.955 { 00:28:48.955 "name": null, 00:28:48.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.955 "is_configured": false, 00:28:48.955 "data_offset": 2048, 00:28:48.955 "data_size": 63488 00:28:48.955 }, 00:28:48.955 { 00:28:48.955 "name": "BaseBdev2", 00:28:48.955 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:48.955 "is_configured": true, 00:28:48.955 "data_offset": 2048, 00:28:48.955 "data_size": 63488 00:28:48.955 }, 00:28:48.955 { 00:28:48.955 "name": "BaseBdev3", 00:28:48.955 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:48.955 "is_configured": true, 00:28:48.955 "data_offset": 2048, 00:28:48.955 "data_size": 63488 00:28:48.955 }, 00:28:48.955 { 00:28:48.955 "name": "BaseBdev4", 00:28:48.955 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:48.955 "is_configured": true, 00:28:48.955 "data_offset": 2048, 00:28:48.955 "data_size": 63488 00:28:48.955 } 00:28:48.955 ] 00:28:48.955 }' 00:28:48.955 12:12:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:48.955 12:12:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:48.955 12:12:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:49.214 12:12:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:49.214 12:12:54 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:49.474 [2024-11-29 12:12:54.777559] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:49.474 [2024-11-29 12:12:54.777940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.474 [2024-11-29 12:12:54.782552] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000027240 00:28:49.474 [2024-11-29 12:12:54.785488] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:49.474 12:12:54 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.415 12:12:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:50.673 "name": "raid_bdev1", 00:28:50.673 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:50.673 "strip_size_kb": 64, 00:28:50.673 "state": "online", 00:28:50.673 "raid_level": "raid5f", 00:28:50.673 "superblock": true, 00:28:50.673 "num_base_bdevs": 4, 00:28:50.673 "num_base_bdevs_discovered": 4, 00:28:50.673 "num_base_bdevs_operational": 4, 00:28:50.673 "process": { 00:28:50.673 "type": "rebuild", 00:28:50.673 "target": "spare", 00:28:50.673 "progress": { 00:28:50.673 "blocks": 23040, 00:28:50.673 "percent": 12 00:28:50.673 } 00:28:50.673 }, 00:28:50.673 "base_bdevs_list": [ 00:28:50.673 { 00:28:50.673 "name": "spare", 00:28:50.673 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:50.673 "is_configured": true, 00:28:50.673 "data_offset": 2048, 00:28:50.673 "data_size": 63488 00:28:50.673 }, 00:28:50.673 { 00:28:50.673 "name": "BaseBdev2", 00:28:50.673 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:50.673 "is_configured": true, 00:28:50.673 "data_offset": 2048, 00:28:50.673 "data_size": 63488 00:28:50.673 }, 00:28:50.673 { 00:28:50.673 "name": "BaseBdev3", 00:28:50.673 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:50.673 "is_configured": true, 00:28:50.673 "data_offset": 2048, 00:28:50.673 "data_size": 63488 00:28:50.673 }, 00:28:50.673 { 00:28:50.673 "name": "BaseBdev4", 00:28:50.673 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:50.673 "is_configured": true, 00:28:50.673 "data_offset": 2048, 00:28:50.673 "data_size": 63488 00:28:50.673 } 00:28:50.673 ] 00:28:50.673 }' 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:28:50.673 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@657 -- # local timeout=750 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.673 12:12:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.932 12:12:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:50.932 "name": "raid_bdev1", 00:28:50.932 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:50.932 "strip_size_kb": 64, 00:28:50.932 "state": "online", 00:28:50.932 "raid_level": "raid5f", 00:28:50.932 "superblock": true, 00:28:50.932 "num_base_bdevs": 4, 00:28:50.932 "num_base_bdevs_discovered": 4, 00:28:50.932 "num_base_bdevs_operational": 4, 00:28:50.932 "process": { 00:28:50.932 "type": "rebuild", 00:28:50.932 "target": "spare", 00:28:50.932 "progress": { 00:28:50.932 "blocks": 30720, 00:28:50.932 "percent": 16 00:28:50.932 } 00:28:50.932 }, 00:28:50.932 "base_bdevs_list": [ 00:28:50.932 { 00:28:50.932 "name": "spare", 00:28:50.932 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:50.932 "is_configured": true, 00:28:50.932 "data_offset": 2048, 00:28:50.932 "data_size": 63488 00:28:50.932 }, 00:28:50.932 { 00:28:50.932 "name": "BaseBdev2", 00:28:50.932 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:50.932 "is_configured": true, 00:28:50.932 "data_offset": 2048, 00:28:50.932 "data_size": 63488 00:28:50.932 }, 00:28:50.932 { 00:28:50.932 "name": "BaseBdev3", 00:28:50.932 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:50.932 "is_configured": true, 00:28:50.932 "data_offset": 2048, 00:28:50.932 "data_size": 63488 00:28:50.932 }, 00:28:50.932 { 00:28:50.932 "name": "BaseBdev4", 00:28:50.932 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:50.932 "is_configured": true, 00:28:50.932 "data_offset": 2048, 00:28:50.932 "data_size": 63488 00:28:50.932 } 00:28:50.932 ] 00:28:50.932 }' 00:28:50.932 12:12:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:51.191 12:12:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:51.191 12:12:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:51.191 12:12:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:51.191 12:12:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.163 12:12:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.422 12:12:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:52.422 "name": "raid_bdev1", 00:28:52.422 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:52.422 "strip_size_kb": 64, 00:28:52.422 "state": "online", 00:28:52.422 "raid_level": "raid5f", 00:28:52.422 "superblock": true, 00:28:52.422 "num_base_bdevs": 4, 00:28:52.422 
"num_base_bdevs_discovered": 4, 00:28:52.422 "num_base_bdevs_operational": 4, 00:28:52.422 "process": { 00:28:52.422 "type": "rebuild", 00:28:52.422 "target": "spare", 00:28:52.422 "progress": { 00:28:52.422 "blocks": 55680, 00:28:52.422 "percent": 29 00:28:52.422 } 00:28:52.422 }, 00:28:52.422 "base_bdevs_list": [ 00:28:52.422 { 00:28:52.422 "name": "spare", 00:28:52.422 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 }, 00:28:52.422 { 00:28:52.422 "name": "BaseBdev2", 00:28:52.422 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 }, 00:28:52.422 { 00:28:52.422 "name": "BaseBdev3", 00:28:52.422 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 }, 00:28:52.422 { 00:28:52.422 "name": "BaseBdev4", 00:28:52.422 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 } 00:28:52.422 ] 00:28:52.422 }' 00:28:52.422 12:12:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:52.422 12:12:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.422 12:12:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:52.422 12:12:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.422 12:12:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.797 12:12:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.797 12:12:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:53.797 "name": "raid_bdev1", 00:28:53.797 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:53.797 "strip_size_kb": 64, 00:28:53.797 "state": "online", 00:28:53.797 "raid_level": "raid5f", 00:28:53.797 "superblock": true, 00:28:53.797 "num_base_bdevs": 4, 00:28:53.797 "num_base_bdevs_discovered": 4, 00:28:53.797 "num_base_bdevs_operational": 4, 00:28:53.797 "process": { 00:28:53.797 "type": "rebuild", 00:28:53.797 "target": "spare", 00:28:53.797 "progress": { 00:28:53.797 "blocks": 80640, 00:28:53.797 "percent": 42 00:28:53.797 } 00:28:53.797 }, 00:28:53.797 "base_bdevs_list": [ 00:28:53.797 { 00:28:53.797 "name": "spare", 00:28:53.797 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:53.797 "is_configured": true, 00:28:53.797 "data_offset": 2048, 00:28:53.797 "data_size": 63488 00:28:53.797 }, 00:28:53.797 { 00:28:53.797 "name": "BaseBdev2", 00:28:53.797 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:53.797 "is_configured": true, 00:28:53.797 "data_offset": 2048, 00:28:53.797 "data_size": 63488 00:28:53.797 }, 00:28:53.797 { 00:28:53.797 "name": "BaseBdev3", 00:28:53.797 
"uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:53.797 "is_configured": true, 00:28:53.797 "data_offset": 2048, 00:28:53.797 "data_size": 63488 00:28:53.797 }, 00:28:53.797 { 00:28:53.797 "name": "BaseBdev4", 00:28:53.797 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:53.797 "is_configured": true, 00:28:53.797 "data_offset": 2048, 00:28:53.797 "data_size": 63488 00:28:53.797 } 00:28:53.797 ] 00:28:53.797 }' 00:28:53.797 12:12:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:53.797 12:12:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.797 12:12:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:53.797 12:12:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.797 12:12:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.734 12:13:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.299 12:13:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:55.299 "name": "raid_bdev1", 00:28:55.299 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:55.299 "strip_size_kb": 64, 00:28:55.299 "state": "online", 00:28:55.299 "raid_level": "raid5f", 00:28:55.299 "superblock": true, 00:28:55.299 "num_base_bdevs": 4, 00:28:55.299 "num_base_bdevs_discovered": 4, 00:28:55.299 "num_base_bdevs_operational": 4, 00:28:55.299 "process": { 00:28:55.299 "type": "rebuild", 00:28:55.299 "target": "spare", 00:28:55.299 "progress": { 00:28:55.299 "blocks": 107520, 00:28:55.299 "percent": 56 00:28:55.299 } 00:28:55.299 }, 00:28:55.299 "base_bdevs_list": [ 00:28:55.299 { 00:28:55.299 "name": "spare", 00:28:55.299 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:55.299 "is_configured": true, 00:28:55.299 "data_offset": 2048, 00:28:55.299 "data_size": 63488 00:28:55.299 }, 00:28:55.299 { 00:28:55.299 "name": "BaseBdev2", 00:28:55.299 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:55.299 "is_configured": true, 00:28:55.299 "data_offset": 2048, 00:28:55.299 "data_size": 63488 00:28:55.299 }, 00:28:55.299 { 00:28:55.299 "name": "BaseBdev3", 00:28:55.299 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:55.299 "is_configured": true, 00:28:55.299 "data_offset": 2048, 00:28:55.299 "data_size": 63488 00:28:55.299 }, 00:28:55.299 { 00:28:55.299 "name": "BaseBdev4", 00:28:55.299 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:55.299 "is_configured": true, 00:28:55.299 "data_offset": 2048, 00:28:55.299 "data_size": 63488 00:28:55.299 } 00:28:55.299 ] 00:28:55.299 }' 00:28:55.299 12:13:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:55.299 12:13:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:55.300 12:13:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:55.300 12:13:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:55.300 12:13:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:56.232 
12:13:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.232 12:13:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.489 12:13:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:56.489 "name": "raid_bdev1", 00:28:56.489 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:56.489 "strip_size_kb": 64, 00:28:56.489 "state": "online", 00:28:56.489 "raid_level": "raid5f", 00:28:56.489 "superblock": true, 00:28:56.489 "num_base_bdevs": 4, 00:28:56.489 "num_base_bdevs_discovered": 4, 00:28:56.489 "num_base_bdevs_operational": 4, 00:28:56.489 "process": { 00:28:56.489 "type": "rebuild", 00:28:56.489 "target": "spare", 00:28:56.489 "progress": { 00:28:56.489 "blocks": 134400, 00:28:56.489 "percent": 70 00:28:56.489 } 00:28:56.489 }, 00:28:56.489 "base_bdevs_list": [ 00:28:56.489 { 00:28:56.489 "name": "spare", 00:28:56.489 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:56.489 "is_configured": true, 00:28:56.489 "data_offset": 2048, 00:28:56.489 "data_size": 63488 00:28:56.489 }, 00:28:56.489 { 00:28:56.489 "name": "BaseBdev2", 00:28:56.489 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:56.489 "is_configured": true, 00:28:56.489 "data_offset": 2048, 00:28:56.489 "data_size": 63488 00:28:56.489 }, 00:28:56.490 { 00:28:56.490 "name": "BaseBdev3", 00:28:56.490 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:56.490 "is_configured": true, 00:28:56.490 "data_offset": 2048, 00:28:56.490 "data_size": 63488 00:28:56.490 }, 00:28:56.490 { 00:28:56.490 "name": "BaseBdev4", 00:28:56.490 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:56.490 "is_configured": true, 00:28:56.490 "data_offset": 2048, 00:28:56.490 "data_size": 63488 00:28:56.490 } 00:28:56.490 ] 00:28:56.490 }' 00:28:56.490 12:13:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:56.490 12:13:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:56.490 12:13:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:56.490 12:13:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:56.490 12:13:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.860 12:13:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.860 12:13:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:57.860 "name": "raid_bdev1", 
00:28:57.860 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:57.860 "strip_size_kb": 64, 00:28:57.860 "state": "online", 00:28:57.860 "raid_level": "raid5f", 00:28:57.860 "superblock": true, 00:28:57.860 "num_base_bdevs": 4, 00:28:57.860 "num_base_bdevs_discovered": 4, 00:28:57.860 "num_base_bdevs_operational": 4, 00:28:57.860 "process": { 00:28:57.860 "type": "rebuild", 00:28:57.860 "target": "spare", 00:28:57.860 "progress": { 00:28:57.860 "blocks": 159360, 00:28:57.860 "percent": 83 00:28:57.860 } 00:28:57.860 }, 00:28:57.860 "base_bdevs_list": [ 00:28:57.860 { 00:28:57.860 "name": "spare", 00:28:57.860 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:57.860 "is_configured": true, 00:28:57.860 "data_offset": 2048, 00:28:57.860 "data_size": 63488 00:28:57.860 }, 00:28:57.860 { 00:28:57.860 "name": "BaseBdev2", 00:28:57.860 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:57.860 "is_configured": true, 00:28:57.860 "data_offset": 2048, 00:28:57.860 "data_size": 63488 00:28:57.860 }, 00:28:57.860 { 00:28:57.860 "name": "BaseBdev3", 00:28:57.860 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:57.860 "is_configured": true, 00:28:57.860 "data_offset": 2048, 00:28:57.860 "data_size": 63488 00:28:57.860 }, 00:28:57.860 { 00:28:57.860 "name": "BaseBdev4", 00:28:57.860 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:57.860 "is_configured": true, 00:28:57.860 "data_offset": 2048, 00:28:57.860 "data_size": 63488 00:28:57.860 } 00:28:57.860 ] 00:28:57.860 }' 00:28:57.860 12:13:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:57.860 12:13:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:57.860 12:13:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:57.860 12:13:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:57.860 12:13:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:59.231 "name": "raid_bdev1", 00:28:59.231 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:28:59.231 "strip_size_kb": 64, 00:28:59.231 "state": "online", 00:28:59.231 "raid_level": "raid5f", 00:28:59.231 "superblock": true, 00:28:59.231 "num_base_bdevs": 4, 00:28:59.231 "num_base_bdevs_discovered": 4, 00:28:59.231 "num_base_bdevs_operational": 4, 00:28:59.231 "process": { 00:28:59.231 "type": "rebuild", 00:28:59.231 "target": "spare", 00:28:59.231 "progress": { 00:28:59.231 "blocks": 186240, 00:28:59.231 "percent": 97 00:28:59.231 } 00:28:59.231 }, 00:28:59.231 "base_bdevs_list": [ 00:28:59.231 { 00:28:59.231 "name": "spare", 00:28:59.231 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:28:59.231 "is_configured": true, 00:28:59.231 "data_offset": 2048, 00:28:59.231 "data_size": 63488 00:28:59.231 }, 00:28:59.231 { 00:28:59.231 "name": 
"BaseBdev2", 00:28:59.231 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:28:59.231 "is_configured": true, 00:28:59.231 "data_offset": 2048, 00:28:59.231 "data_size": 63488 00:28:59.231 }, 00:28:59.231 { 00:28:59.231 "name": "BaseBdev3", 00:28:59.231 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:28:59.231 "is_configured": true, 00:28:59.231 "data_offset": 2048, 00:28:59.231 "data_size": 63488 00:28:59.231 }, 00:28:59.231 { 00:28:59.231 "name": "BaseBdev4", 00:28:59.231 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:28:59.231 "is_configured": true, 00:28:59.231 "data_offset": 2048, 00:28:59.231 "data_size": 63488 00:28:59.231 } 00:28:59.231 ] 00:28:59.231 }' 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.231 12:13:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:59.489 [2024-11-29 12:13:04.877949] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:59.489 [2024-11-29 12:13:04.878300] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:59.489 [2024-11-29 12:13:04.878678] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.423 12:13:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.681 12:13:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:00.681 "name": "raid_bdev1", 00:29:00.681 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:29:00.681 "strip_size_kb": 64, 00:29:00.681 "state": "online", 00:29:00.681 "raid_level": "raid5f", 00:29:00.681 "superblock": true, 00:29:00.681 "num_base_bdevs": 4, 00:29:00.681 "num_base_bdevs_discovered": 4, 00:29:00.681 "num_base_bdevs_operational": 4, 00:29:00.681 "base_bdevs_list": [ 00:29:00.681 { 00:29:00.681 "name": "spare", 00:29:00.681 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:29:00.681 "is_configured": true, 00:29:00.681 "data_offset": 2048, 00:29:00.681 "data_size": 63488 00:29:00.681 }, 00:29:00.681 { 00:29:00.681 "name": "BaseBdev2", 00:29:00.681 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:29:00.681 "is_configured": true, 00:29:00.681 "data_offset": 2048, 00:29:00.681 "data_size": 63488 00:29:00.681 }, 00:29:00.681 { 00:29:00.681 "name": "BaseBdev3", 00:29:00.681 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:29:00.681 "is_configured": true, 00:29:00.681 "data_offset": 2048, 00:29:00.681 "data_size": 63488 00:29:00.681 }, 00:29:00.681 { 00:29:00.681 "name": "BaseBdev4", 00:29:00.681 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:29:00.681 "is_configured": true, 00:29:00.681 "data_offset": 2048, 00:29:00.681 "data_size": 63488 00:29:00.681 } 
00:29:00.681 ] 00:29:00.681 }' 00:29:00.681 12:13:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@660 -- # break 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:00.681 12:13:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:00.682 12:13:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:00.682 12:13:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.682 12:13:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.940 12:13:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:00.940 "name": "raid_bdev1", 00:29:00.940 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:29:00.940 "strip_size_kb": 64, 00:29:00.940 "state": "online", 00:29:00.940 "raid_level": "raid5f", 00:29:00.940 "superblock": true, 00:29:00.940 "num_base_bdevs": 4, 00:29:00.940 "num_base_bdevs_discovered": 4, 00:29:00.940 "num_base_bdevs_operational": 4, 00:29:00.940 "base_bdevs_list": [ 00:29:00.940 { 00:29:00.940 "name": "spare", 00:29:00.940 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:29:00.940 "is_configured": true, 00:29:00.940 "data_offset": 2048, 00:29:00.940 "data_size": 63488 00:29:00.940 }, 00:29:00.940 { 00:29:00.940 "name": "BaseBdev2", 00:29:00.940 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:29:00.940 "is_configured": true, 00:29:00.940 "data_offset": 2048, 00:29:00.940 "data_size": 63488 00:29:00.940 }, 00:29:00.940 { 00:29:00.940 "name": "BaseBdev3", 00:29:00.940 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:29:00.940 "is_configured": true, 00:29:00.940 "data_offset": 2048, 00:29:00.940 "data_size": 63488 00:29:00.940 }, 00:29:00.940 { 00:29:00.940 "name": "BaseBdev4", 00:29:00.940 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:29:00.940 "is_configured": true, 00:29:00.940 "data_offset": 2048, 00:29:00.940 "data_size": 63488 00:29:00.940 } 00:29:00.940 ] 00:29:00.940 }' 00:29:00.940 12:13:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:00.940 12:13:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:00.940 12:13:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:01.199 12:13:06 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.199 12:13:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.457 12:13:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:01.457 "name": "raid_bdev1", 00:29:01.457 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:29:01.457 "strip_size_kb": 64, 00:29:01.457 "state": "online", 00:29:01.457 "raid_level": "raid5f", 00:29:01.457 "superblock": true, 00:29:01.457 "num_base_bdevs": 4, 00:29:01.457 "num_base_bdevs_discovered": 4, 00:29:01.457 "num_base_bdevs_operational": 4, 00:29:01.457 "base_bdevs_list": [ 00:29:01.457 { 00:29:01.457 "name": "spare", 00:29:01.457 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:29:01.457 "is_configured": true, 00:29:01.457 "data_offset": 2048, 00:29:01.457 "data_size": 63488 00:29:01.457 }, 00:29:01.457 { 00:29:01.457 "name": "BaseBdev2", 00:29:01.457 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:29:01.457 "is_configured": true, 00:29:01.457 "data_offset": 2048, 00:29:01.457 "data_size": 63488 00:29:01.457 }, 00:29:01.457 { 00:29:01.457 "name": "BaseBdev3", 00:29:01.457 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:29:01.457 "is_configured": true, 00:29:01.457 "data_offset": 2048, 00:29:01.457 "data_size": 63488 00:29:01.457 }, 00:29:01.457 { 00:29:01.457 "name": "BaseBdev4", 00:29:01.457 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:29:01.457 "is_configured": true, 00:29:01.457 "data_offset": 2048, 00:29:01.457 "data_size": 63488 00:29:01.457 } 00:29:01.457 ] 00:29:01.457 }' 00:29:01.457 12:13:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:01.457 12:13:06 -- common/autotest_common.sh@10 -- # set +x 00:29:02.023 12:13:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:02.281 [2024-11-29 12:13:07.625684] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:02.281 [2024-11-29 12:13:07.625996] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:02.281 [2024-11-29 12:13:07.626243] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:02.281 [2024-11-29 12:13:07.626487] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:02.281 [2024-11-29 12:13:07.626626] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:02.281 12:13:07 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.281 12:13:07 -- bdev/bdev_raid.sh@671 -- # jq length 00:29:02.540 12:13:07 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:29:02.540 12:13:07 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:29:02.540 12:13:07 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:02.540 12:13:07 -- 
bdev/nbd_common.sh@12 -- # local i 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:02.540 12:13:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:02.800 /dev/nbd0 00:29:02.800 12:13:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:02.800 12:13:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:02.800 12:13:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:02.800 12:13:08 -- common/autotest_common.sh@867 -- # local i 00:29:02.800 12:13:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:02.800 12:13:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:02.800 12:13:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:02.800 12:13:08 -- common/autotest_common.sh@871 -- # break 00:29:02.800 12:13:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:02.800 12:13:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:02.800 12:13:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:02.800 1+0 records in 00:29:02.800 1+0 records out 00:29:02.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584671 s, 7.0 MB/s 00:29:02.800 12:13:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.800 12:13:08 -- common/autotest_common.sh@884 -- # size=4096 00:29:02.800 12:13:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.800 12:13:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:02.800 12:13:08 -- common/autotest_common.sh@887 -- # return 0 00:29:02.800 12:13:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:02.800 12:13:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:02.800 12:13:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:03.060 /dev/nbd1 00:29:03.060 12:13:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:03.060 12:13:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:03.060 12:13:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:03.060 12:13:08 -- common/autotest_common.sh@867 -- # local i 00:29:03.060 12:13:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:03.060 12:13:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:03.060 12:13:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:03.060 12:13:08 -- common/autotest_common.sh@871 -- # break 00:29:03.060 12:13:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:03.060 12:13:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:03.060 12:13:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.319 1+0 records in 00:29:03.319 1+0 records out 00:29:03.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576805 s, 7.1 MB/s 00:29:03.319 12:13:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.319 12:13:08 -- common/autotest_common.sh@884 -- # size=4096 00:29:03.319 12:13:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.319 12:13:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:03.319 12:13:08 -- 
common/autotest_common.sh@887 -- # return 0 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:03.319 12:13:08 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:03.319 12:13:08 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@51 -- # local i 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.319 12:13:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@41 -- # break 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@45 -- # return 0 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.578 12:13:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@41 -- # break 00:29:03.836 12:13:09 -- bdev/nbd_common.sh@45 -- # return 0 00:29:03.836 12:13:09 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:29:03.836 12:13:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:03.836 12:13:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:29:03.836 12:13:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:04.094 12:13:09 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:04.353 [2024-11-29 12:13:09.691885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:04.353 [2024-11-29 12:13:09.692270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.353 [2024-11-29 12:13:09.692379] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:04.353 [2024-11-29 12:13:09.692563] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.353 [2024-11-29 12:13:09.695237] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.353 [2024-11-29 12:13:09.695433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:04.353 [2024-11-29 
12:13:09.695655] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:04.353 [2024-11-29 12:13:09.695830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:04.353 BaseBdev1 00:29:04.353 12:13:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:04.353 12:13:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:29:04.353 12:13:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:29:04.611 12:13:09 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:04.890 [2024-11-29 12:13:10.164282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:04.890 [2024-11-29 12:13:10.164658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.890 [2024-11-29 12:13:10.164748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:04.890 [2024-11-29 12:13:10.164888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.890 [2024-11-29 12:13:10.165392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.890 [2024-11-29 12:13:10.165576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:04.890 [2024-11-29 12:13:10.165781] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:29:04.890 [2024-11-29 12:13:10.165899] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:29:04.890 [2024-11-29 12:13:10.165998] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:04.890 [2024-11-29 12:13:10.166078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:29:04.890 [2024-11-29 12:13:10.166328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:04.890 BaseBdev2 00:29:04.890 12:13:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:04.890 12:13:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:29:04.890 12:13:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:29:05.148 12:13:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:05.406 [2024-11-29 12:13:10.708392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:05.406 [2024-11-29 12:13:10.708767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.406 [2024-11-29 12:13:10.708850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:05.406 [2024-11-29 12:13:10.708981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.406 [2024-11-29 12:13:10.709501] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.406 [2024-11-29 12:13:10.709684] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:05.406 [2024-11-29 12:13:10.709886] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:29:05.406 [2024-11-29 12:13:10.710020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:05.406 BaseBdev3 00:29:05.406 12:13:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:05.406 12:13:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:29:05.406 12:13:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:29:05.664 12:13:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:05.664 [2024-11-29 12:13:11.172512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:05.664 [2024-11-29 12:13:11.172774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.664 [2024-11-29 12:13:11.172939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:05.664 [2024-11-29 12:13:11.173079] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.664 [2024-11-29 12:13:11.173660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.664 [2024-11-29 12:13:11.173837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:05.664 [2024-11-29 12:13:11.174031] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:29:05.664 [2024-11-29 12:13:11.174181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:05.664 BaseBdev4 00:29:05.922 12:13:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:06.180 [2024-11-29 12:13:11.664582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:06.180 [2024-11-29 12:13:11.664969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:06.180 [2024-11-29 12:13:11.665051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:29:06.180 [2024-11-29 12:13:11.665230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:06.180 [2024-11-29 12:13:11.665783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:06.180 [2024-11-29 12:13:11.665963] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:06.180 [2024-11-29 12:13:11.666199] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:29:06.180 [2024-11-29 12:13:11.666371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:06.180 spare 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:06.180 12:13:11 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.180 12:13:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.439 [2024-11-29 12:13:11.766587] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:29:06.439 [2024-11-29 12:13:11.766897] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:06.439 [2024-11-29 12:13:11.767170] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045ea0 00:29:06.439 [2024-11-29 12:13:11.768143] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:29:06.439 [2024-11-29 12:13:11.768287] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:29:06.439 [2024-11-29 12:13:11.768586] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.698 12:13:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:06.698 "name": "raid_bdev1", 00:29:06.698 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:29:06.698 "strip_size_kb": 64, 00:29:06.698 "state": "online", 00:29:06.698 "raid_level": "raid5f", 00:29:06.698 "superblock": true, 00:29:06.698 "num_base_bdevs": 4, 00:29:06.698 "num_base_bdevs_discovered": 4, 00:29:06.698 "num_base_bdevs_operational": 4, 00:29:06.698 "base_bdevs_list": [ 00:29:06.698 { 00:29:06.698 "name": "spare", 00:29:06.698 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:29:06.698 "is_configured": true, 00:29:06.698 "data_offset": 2048, 00:29:06.698 "data_size": 63488 00:29:06.698 }, 00:29:06.698 { 00:29:06.698 "name": "BaseBdev2", 00:29:06.698 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:29:06.698 "is_configured": true, 00:29:06.698 "data_offset": 2048, 00:29:06.698 "data_size": 63488 00:29:06.698 }, 00:29:06.698 { 00:29:06.698 "name": "BaseBdev3", 00:29:06.698 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:29:06.698 "is_configured": true, 00:29:06.698 "data_offset": 2048, 00:29:06.698 "data_size": 63488 00:29:06.698 }, 00:29:06.698 { 00:29:06.698 "name": "BaseBdev4", 00:29:06.698 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:29:06.698 "is_configured": true, 00:29:06.698 "data_offset": 2048, 00:29:06.698 "data_size": 63488 00:29:06.698 } 00:29:06.698 ] 00:29:06.698 }' 00:29:06.698 12:13:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:06.698 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.264 12:13:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.522 12:13:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:07.522 "name": 
"raid_bdev1", 00:29:07.522 "uuid": "1da4953c-5c34-4e20-b2f0-98a1a0a80515", 00:29:07.522 "strip_size_kb": 64, 00:29:07.522 "state": "online", 00:29:07.522 "raid_level": "raid5f", 00:29:07.522 "superblock": true, 00:29:07.522 "num_base_bdevs": 4, 00:29:07.522 "num_base_bdevs_discovered": 4, 00:29:07.522 "num_base_bdevs_operational": 4, 00:29:07.522 "base_bdevs_list": [ 00:29:07.522 { 00:29:07.522 "name": "spare", 00:29:07.522 "uuid": "c8579376-ce30-5c8d-b712-8893d30863bf", 00:29:07.522 "is_configured": true, 00:29:07.522 "data_offset": 2048, 00:29:07.522 "data_size": 63488 00:29:07.522 }, 00:29:07.522 { 00:29:07.522 "name": "BaseBdev2", 00:29:07.522 "uuid": "0867ed46-cc1f-537e-8f35-2ee449989ffd", 00:29:07.522 "is_configured": true, 00:29:07.522 "data_offset": 2048, 00:29:07.522 "data_size": 63488 00:29:07.522 }, 00:29:07.522 { 00:29:07.522 "name": "BaseBdev3", 00:29:07.522 "uuid": "7c27722b-a792-5bab-8139-f74e4cd3177b", 00:29:07.522 "is_configured": true, 00:29:07.522 "data_offset": 2048, 00:29:07.522 "data_size": 63488 00:29:07.522 }, 00:29:07.522 { 00:29:07.522 "name": "BaseBdev4", 00:29:07.522 "uuid": "131dda56-e1f5-5061-ac4f-fa688bbcdfbb", 00:29:07.522 "is_configured": true, 00:29:07.522 "data_offset": 2048, 00:29:07.522 "data_size": 63488 00:29:07.522 } 00:29:07.522 ] 00:29:07.522 }' 00:29:07.522 12:13:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:07.522 12:13:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:07.522 12:13:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:07.780 12:13:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:07.780 12:13:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.780 12:13:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:08.038 12:13:13 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:29:08.038 12:13:13 -- bdev/bdev_raid.sh@709 -- # killprocess 143101 00:29:08.038 12:13:13 -- common/autotest_common.sh@936 -- # '[' -z 143101 ']' 00:29:08.038 12:13:13 -- common/autotest_common.sh@940 -- # kill -0 143101 00:29:08.038 12:13:13 -- common/autotest_common.sh@941 -- # uname 00:29:08.038 12:13:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.038 12:13:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143101 00:29:08.038 killing process with pid 143101 00:29:08.038 Received shutdown signal, test time was about 60.000000 seconds 00:29:08.038 00:29:08.038 Latency(us) 00:29:08.038 [2024-11-29T12:13:13.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.038 [2024-11-29T12:13:13.549Z] =================================================================================================================== 00:29:08.038 [2024-11-29T12:13:13.549Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:08.038 12:13:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:08.038 12:13:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:08.038 12:13:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143101' 00:29:08.038 12:13:13 -- common/autotest_common.sh@955 -- # kill 143101 00:29:08.038 [2024-11-29 12:13:13.370277] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:08.038 12:13:13 -- common/autotest_common.sh@960 -- # wait 143101 00:29:08.038 [2024-11-29 12:13:13.370399] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:29:08.038 [2024-11-29 12:13:13.370503] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:08.038 [2024-11-29 12:13:13.370516] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:29:08.038 [2024-11-29 12:13:13.430515] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:08.296 12:13:13 -- bdev/bdev_raid.sh@711 -- # return 0 00:29:08.297 00:29:08.297 real 0m30.259s 00:29:08.297 user 0m47.402s 00:29:08.297 sys 0m3.597s 00:29:08.297 12:13:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:08.297 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.297 ************************************ 00:29:08.297 END TEST raid5f_rebuild_test_sb 00:29:08.297 ************************************ 00:29:08.297 12:13:13 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:29:08.297 ************************************ 00:29:08.297 END TEST bdev_raid 00:29:08.297 ************************************ 00:29:08.297 00:29:08.297 real 12m17.678s 00:29:08.297 user 21m4.852s 00:29:08.297 sys 1m39.973s 00:29:08.297 12:13:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:08.297 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.297 12:13:13 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:29:08.297 12:13:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:08.297 12:13:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:08.297 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.297 ************************************ 00:29:08.297 START TEST bdevperf_config 00:29:08.297 ************************************ 00:29:08.297 12:13:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:29:08.554 * Looking for test storage... 00:29:08.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:29:08.554 12:13:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:08.554 12:13:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:08.554 12:13:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:08.554 12:13:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:08.554 12:13:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:08.554 12:13:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:08.554 12:13:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:08.554 12:13:13 -- scripts/common.sh@335 -- # IFS=.-: 00:29:08.554 12:13:13 -- scripts/common.sh@335 -- # read -ra ver1 00:29:08.554 12:13:13 -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.554 12:13:13 -- scripts/common.sh@336 -- # read -ra ver2 00:29:08.554 12:13:13 -- scripts/common.sh@337 -- # local 'op=<' 00:29:08.554 12:13:13 -- scripts/common.sh@339 -- # ver1_l=2 00:29:08.554 12:13:13 -- scripts/common.sh@340 -- # ver2_l=1 00:29:08.554 12:13:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:08.554 12:13:13 -- scripts/common.sh@343 -- # case "$op" in 00:29:08.554 12:13:13 -- scripts/common.sh@344 -- # : 1 00:29:08.554 12:13:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:08.554 12:13:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.554 12:13:13 -- scripts/common.sh@364 -- # decimal 1 00:29:08.554 12:13:13 -- scripts/common.sh@352 -- # local d=1 00:29:08.554 12:13:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.554 12:13:13 -- scripts/common.sh@354 -- # echo 1 00:29:08.554 12:13:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:08.554 12:13:13 -- scripts/common.sh@365 -- # decimal 2 00:29:08.554 12:13:13 -- scripts/common.sh@352 -- # local d=2 00:29:08.554 12:13:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.554 12:13:13 -- scripts/common.sh@354 -- # echo 2 00:29:08.554 12:13:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:08.554 12:13:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:08.554 12:13:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:08.554 12:13:13 -- scripts/common.sh@367 -- # return 0 00:29:08.554 12:13:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.554 12:13:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.554 --rc genhtml_branch_coverage=1 00:29:08.554 --rc genhtml_function_coverage=1 00:29:08.554 --rc genhtml_legend=1 00:29:08.554 --rc geninfo_all_blocks=1 00:29:08.554 --rc geninfo_unexecuted_blocks=1 00:29:08.554 00:29:08.554 ' 00:29:08.554 12:13:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.554 --rc genhtml_branch_coverage=1 00:29:08.554 --rc genhtml_function_coverage=1 00:29:08.554 --rc genhtml_legend=1 00:29:08.554 --rc geninfo_all_blocks=1 00:29:08.554 --rc geninfo_unexecuted_blocks=1 00:29:08.554 00:29:08.554 ' 00:29:08.554 12:13:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.554 --rc genhtml_branch_coverage=1 00:29:08.554 --rc genhtml_function_coverage=1 00:29:08.554 --rc genhtml_legend=1 00:29:08.554 --rc geninfo_all_blocks=1 00:29:08.554 --rc geninfo_unexecuted_blocks=1 00:29:08.554 00:29:08.554 ' 00:29:08.554 12:13:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.554 --rc genhtml_branch_coverage=1 00:29:08.554 --rc genhtml_function_coverage=1 00:29:08.554 --rc genhtml_legend=1 00:29:08.554 --rc geninfo_all_blocks=1 00:29:08.554 --rc geninfo_unexecuted_blocks=1 00:29:08.554 00:29:08.554 ' 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:29:08.554 12:13:13 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:29:08.554 12:13:13 -- bdevperf/common.sh@8 -- # local job_section=global 00:29:08.554 12:13:13 -- bdevperf/common.sh@9 -- # local rw=read 00:29:08.554 12:13:13 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:08.554 12:13:13 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:08.554 12:13:13 -- bdevperf/common.sh@13 
-- # cat 00:29:08.554 12:13:13 -- bdevperf/common.sh@18 -- # job='[global]' 00:29:08.554 12:13:13 -- bdevperf/common.sh@19 -- # echo 00:29:08.554 00:29:08.554 12:13:13 -- bdevperf/common.sh@20 -- # cat 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@18 -- # create_job job0 00:29:08.554 12:13:13 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:08.554 12:13:13 -- bdevperf/common.sh@9 -- # local rw= 00:29:08.554 12:13:13 -- bdevperf/common.sh@10 -- # local filename= 00:29:08.554 12:13:13 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:08.554 12:13:13 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:08.554 12:13:13 -- bdevperf/common.sh@19 -- # echo 00:29:08.554 00:29:08.554 12:13:13 -- bdevperf/common.sh@20 -- # cat 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@19 -- # create_job job1 00:29:08.554 12:13:13 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:08.554 12:13:13 -- bdevperf/common.sh@9 -- # local rw= 00:29:08.554 12:13:13 -- bdevperf/common.sh@10 -- # local filename= 00:29:08.554 12:13:13 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:08.554 12:13:13 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:08.554 12:13:13 -- bdevperf/common.sh@19 -- # echo 00:29:08.554 00:29:08.554 12:13:13 -- bdevperf/common.sh@20 -- # cat 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@20 -- # create_job job2 00:29:08.554 12:13:13 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:08.554 12:13:13 -- bdevperf/common.sh@9 -- # local rw= 00:29:08.554 12:13:13 -- bdevperf/common.sh@10 -- # local filename= 00:29:08.554 12:13:13 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:08.554 12:13:13 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:08.554 12:13:13 -- bdevperf/common.sh@19 -- # echo 00:29:08.554 00:29:08.554 12:13:13 -- bdevperf/common.sh@20 -- # cat 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@21 -- # create_job job3 00:29:08.554 12:13:13 -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:08.554 12:13:13 -- bdevperf/common.sh@9 -- # local rw= 00:29:08.554 12:13:13 -- bdevperf/common.sh@10 -- # local filename= 00:29:08.554 12:13:13 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:08.554 12:13:13 -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:08.554 12:13:13 -- bdevperf/common.sh@19 -- # echo 00:29:08.554 00:29:08.554 12:13:13 -- bdevperf/common.sh@20 -- # cat 00:29:08.554 12:13:13 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:11.834 12:13:16 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-29 12:13:14.033511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
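(Sketch, not captured output: the create_job calls traced above assemble an INI-style job file at test.conf, which bdevperf then consumes via -j alongside the bdev definitions passed with --json. The generated file is never printed in the trace, so the exact keys and ordering below are assumed; with job0-job3 given no arguments of their own, it should look roughly like:)

    [global]
    filename=Malloc0
    rw=read
    [job0]
    [job1]
    [job2]
    [job3]

(Since the per-job sections carry no overrides, all four jobs inherit the Malloc0 read workload from [global], which is consistent with the four identical Malloc0 read lines in the latency table below.)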
00:29:11.834 [2024-11-29 12:13:14.033747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143885 ] 00:29:11.834 Using job config with 4 jobs 00:29:11.834 [2024-11-29 12:13:14.178117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.834 [2024-11-29 12:13:14.293854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.834 cpumask for '\''job0'\'' is too big 00:29:11.834 cpumask for '\''job1'\'' is too big 00:29:11.834 cpumask for '\''job2'\'' is too big 00:29:11.834 cpumask for '\''job3'\'' is too big 00:29:11.834 Running I/O for 2 seconds... 00:29:11.834 00:29:11.834 Latency(us) 00:29:11.834 [2024-11-29T12:13:17.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.834 [2024-11-29T12:13:17.345Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.834 Malloc0 : 2.02 25353.00 24.76 0.00 0.00 10087.09 2278.87 19422.49 00:29:11.834 [2024-11-29T12:13:17.345Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.834 Malloc0 : 2.02 25331.26 24.74 0.00 0.00 10068.43 2278.87 17158.52 00:29:11.834 [2024-11-29T12:13:17.345Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.834 Malloc0 : 2.02 25309.79 24.72 0.00 0.00 10049.83 2234.18 14834.97 00:29:11.834 [2024-11-29T12:13:17.345Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.834 Malloc0 : 2.03 25383.21 24.79 0.00 0.00 9994.06 904.84 12690.15 00:29:11.834 [2024-11-29T12:13:17.345Z] =================================================================================================================== 00:29:11.834 [2024-11-29T12:13:17.345Z] Total : 101377.25 99.00 0.00 0.00 10049.78 904.84 19422.49' 00:29:11.834 12:13:16 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-29 12:13:14.033511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:11.834 [2024-11-29 12:13:14.033747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143885 ] 00:29:11.834 Using job config with 4 jobs 00:29:11.834 [2024-11-29 12:13:14.178117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.834 [2024-11-29 12:13:14.293854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.834 cpumask for '\''job0'\'' is too big 00:29:11.835 cpumask for '\''job1'\'' is too big 00:29:11.835 cpumask for '\''job2'\'' is too big 00:29:11.835 cpumask for '\''job3'\'' is too big 00:29:11.835 Running I/O for 2 seconds... 
00:29:11.835 00:29:11.835 Latency(us) 00:29:11.835 [2024-11-29T12:13:17.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.02 25353.00 24.76 0.00 0.00 10087.09 2278.87 19422.49 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.02 25331.26 24.74 0.00 0.00 10068.43 2278.87 17158.52 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.02 25309.79 24.72 0.00 0.00 10049.83 2234.18 14834.97 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.03 25383.21 24.79 0.00 0.00 9994.06 904.84 12690.15 00:29:11.835 [2024-11-29T12:13:17.346Z] =================================================================================================================== 00:29:11.835 [2024-11-29T12:13:17.346Z] Total : 101377.25 99.00 0.00 0.00 10049.78 904.84 19422.49' 00:29:11.835 12:13:16 -- bdevperf/common.sh@32 -- # echo '[2024-11-29 12:13:14.033511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:11.835 [2024-11-29 12:13:14.033747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143885 ] 00:29:11.835 Using job config with 4 jobs 00:29:11.835 [2024-11-29 12:13:14.178117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.835 [2024-11-29 12:13:14.293854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.835 cpumask for '\''job0'\'' is too big 00:29:11.835 cpumask for '\''job1'\'' is too big 00:29:11.835 cpumask for '\''job2'\'' is too big 00:29:11.835 cpumask for '\''job3'\'' is too big 00:29:11.835 Running I/O for 2 seconds... 
00:29:11.835 00:29:11.835 Latency(us) 00:29:11.835 [2024-11-29T12:13:17.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.02 25353.00 24.76 0.00 0.00 10087.09 2278.87 19422.49 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.02 25331.26 24.74 0.00 0.00 10068.43 2278.87 17158.52 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.02 25309.79 24.72 0.00 0.00 10049.83 2234.18 14834.97 00:29:11.835 [2024-11-29T12:13:17.346Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:11.835 Malloc0 : 2.03 25383.21 24.79 0.00 0.00 9994.06 904.84 12690.15 00:29:11.835 [2024-11-29T12:13:17.346Z] =================================================================================================================== 00:29:11.835 [2024-11-29T12:13:17.346Z] Total : 101377.25 99.00 0.00 0.00 10049.78 904.84 19422.49' 00:29:11.835 12:13:16 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:11.835 12:13:16 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:11.835 12:13:16 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:29:11.835 12:13:16 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:11.835 [2024-11-29 12:13:16.884984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:11.835 [2024-11-29 12:13:16.885642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143930 ] 00:29:11.835 [2024-11-29 12:13:17.042531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.835 [2024-11-29 12:13:17.151483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.835 cpumask for 'job0' is too big 00:29:11.835 cpumask for 'job1' is too big 00:29:11.835 cpumask for 'job2' is too big 00:29:11.835 cpumask for 'job3' is too big 00:29:14.364 12:13:19 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:29:14.364 Running I/O for 2 seconds... 
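(Sketch: the [[ 4 == \4 ]] comparison above is fed by get_num_jobs, which scrapes the job count out of the captured bdevperf banner using the two grep calls visible in the trace at common.sh@32. Reconstructed from those calls, the helper is roughly:)

    get_num_jobs() {
        # keep only the "Using job config with N jobs" banner line, then strip it down to N
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }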
00:29:14.364 00:29:14.364 Latency(us) 00:29:14.364 [2024-11-29T12:13:19.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.364 [2024-11-29T12:13:19.875Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:14.364 Malloc0 : 2.01 25442.66 24.85 0.00 0.00 10053.16 1891.61 15609.48 00:29:14.364 [2024-11-29T12:13:19.875Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:14.364 Malloc0 : 2.02 25453.24 24.86 0.00 0.00 10025.52 1846.92 13822.14 00:29:14.364 [2024-11-29T12:13:19.876Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:14.365 Malloc0 : 2.02 25431.85 24.84 0.00 0.00 10012.34 1854.37 12034.79 00:29:14.365 [2024-11-29T12:13:19.876Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:14.365 Malloc0 : 2.02 25411.05 24.82 0.00 0.00 9998.85 1891.61 10426.18 00:29:14.365 [2024-11-29T12:13:19.876Z] =================================================================================================================== 00:29:14.365 [2024-11-29T12:13:19.876Z] Total : 101738.80 99.35 0.00 0.00 10022.43 1846.92 15609.48' 00:29:14.365 12:13:19 -- bdevperf/test_config.sh@27 -- # cleanup 00:29:14.365 12:13:19 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:14.365 00:29:14.365 12:13:19 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:29:14.365 12:13:19 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:14.365 12:13:19 -- bdevperf/common.sh@9 -- # local rw=write 00:29:14.365 12:13:19 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:14.365 12:13:19 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:14.365 12:13:19 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:14.365 12:13:19 -- bdevperf/common.sh@19 -- # echo 00:29:14.365 12:13:19 -- bdevperf/common.sh@20 -- # cat 00:29:14.365 00:29:14.365 12:13:19 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:29:14.365 12:13:19 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:14.365 12:13:19 -- bdevperf/common.sh@9 -- # local rw=write 00:29:14.365 12:13:19 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:14.365 12:13:19 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:14.365 12:13:19 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:14.365 12:13:19 -- bdevperf/common.sh@19 -- # echo 00:29:14.365 12:13:19 -- bdevperf/common.sh@20 -- # cat 00:29:14.365 12:13:19 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:29:14.365 12:13:19 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:14.365 12:13:19 -- bdevperf/common.sh@9 -- # local rw=write 00:29:14.365 12:13:19 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:14.365 12:13:19 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:14.365 12:13:19 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:14.365 12:13:19 -- bdevperf/common.sh@19 -- # echo 00:29:14.365 00:29:14.365 12:13:19 -- bdevperf/common.sh@20 -- # cat 00:29:14.365 12:13:19 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-29 12:13:19.740221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
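(Sketch: each sub-test in test_config.sh repeats the cycle visible in the trace just above — cleanup removes the previous test.conf, create_job rebuilds it, bdevperf runs for 2 seconds against the bdevs defined in conf.json, and the reported job count is asserted. Using the $bdevperf, $jsonconf and $testconf variables set at the top of the test, the write-workload pass started here amounts to roughly the following; redirections and the exact assertion form are assumed, not quoted from the script:)

    create_job job0 write Malloc0
    create_job job1 write Malloc0
    create_job job2 write Malloc0
    bdevperf_output=$("$bdevperf" -t 2 --json "$jsonconf" -j "$testconf" 2>&1)
    # three explicit [jobN] sections were created, so the banner must report 3 jobs
    [[ $(get_num_jobs "$bdevperf_output") == "3" ]]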
00:29:17.647 [2024-11-29 12:13:19.740520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143969 ] 00:29:17.647 Using job config with 3 jobs 00:29:17.647 [2024-11-29 12:13:19.896557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.647 [2024-11-29 12:13:20.008731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.647 cpumask for '\''job0'\'' is too big 00:29:17.647 cpumask for '\''job1'\'' is too big 00:29:17.647 cpumask for '\''job2'\'' is too big 00:29:17.647 Running I/O for 2 seconds... 00:29:17.647 00:29:17.647 Latency(us) 00:29:17.647 [2024-11-29T12:13:23.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.01 34545.14 33.74 0.00 0.00 7401.70 1899.05 11200.70 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.02 34556.79 33.75 0.00 0.00 7383.72 1832.03 9353.77 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.02 34528.00 33.72 0.00 0.00 7374.19 1817.13 7626.01 00:29:17.647 [2024-11-29T12:13:23.158Z] =================================================================================================================== 00:29:17.647 [2024-11-29T12:13:23.158Z] Total : 103629.93 101.20 0.00 0.00 7386.52 1817.13 11200.70' 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-29 12:13:19.740221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:17.647 [2024-11-29 12:13:19.740520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143969 ] 00:29:17.647 Using job config with 3 jobs 00:29:17.647 [2024-11-29 12:13:19.896557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.647 [2024-11-29 12:13:20.008731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.647 cpumask for '\''job0'\'' is too big 00:29:17.647 cpumask for '\''job1'\'' is too big 00:29:17.647 cpumask for '\''job2'\'' is too big 00:29:17.647 Running I/O for 2 seconds... 
00:29:17.647 00:29:17.647 Latency(us) 00:29:17.647 [2024-11-29T12:13:23.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.01 34545.14 33.74 0.00 0.00 7401.70 1899.05 11200.70 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.02 34556.79 33.75 0.00 0.00 7383.72 1832.03 9353.77 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.02 34528.00 33.72 0.00 0.00 7374.19 1817.13 7626.01 00:29:17.647 [2024-11-29T12:13:23.158Z] =================================================================================================================== 00:29:17.647 [2024-11-29T12:13:23.158Z] Total : 103629.93 101.20 0.00 0.00 7386.52 1817.13 11200.70' 00:29:17.647 12:13:22 -- bdevperf/common.sh@32 -- # echo '[2024-11-29 12:13:19.740221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:17.647 [2024-11-29 12:13:19.740520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143969 ] 00:29:17.647 Using job config with 3 jobs 00:29:17.647 [2024-11-29 12:13:19.896557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.647 [2024-11-29 12:13:20.008731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.647 cpumask for '\''job0'\'' is too big 00:29:17.647 cpumask for '\''job1'\'' is too big 00:29:17.647 cpumask for '\''job2'\'' is too big 00:29:17.647 Running I/O for 2 seconds... 
00:29:17.647 00:29:17.647 Latency(us) 00:29:17.647 [2024-11-29T12:13:23.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.01 34545.14 33.74 0.00 0.00 7401.70 1899.05 11200.70 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.02 34556.79 33.75 0.00 0.00 7383.72 1832.03 9353.77 00:29:17.647 [2024-11-29T12:13:23.158Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:17.647 Malloc0 : 2.02 34528.00 33.72 0.00 0.00 7374.19 1817.13 7626.01 00:29:17.647 [2024-11-29T12:13:23.158Z] =================================================================================================================== 00:29:17.647 [2024-11-29T12:13:23.158Z] Total : 103629.93 101.20 0.00 0.00 7386.52 1817.13 11200.70' 00:29:17.647 12:13:22 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:17.647 12:13:22 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@35 -- # cleanup 00:29:17.647 12:13:22 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:29:17.647 12:13:22 -- bdevperf/common.sh@8 -- # local job_section=global 00:29:17.647 12:13:22 -- bdevperf/common.sh@9 -- # local rw=rw 00:29:17.647 12:13:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:29:17.647 12:13:22 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:17.647 12:13:22 -- bdevperf/common.sh@13 -- # cat 00:29:17.647 12:13:22 -- bdevperf/common.sh@18 -- # job='[global]' 00:29:17.647 00:29:17.647 12:13:22 -- bdevperf/common.sh@19 -- # echo 00:29:17.647 12:13:22 -- bdevperf/common.sh@20 -- # cat 00:29:17.647 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@38 -- # create_job job0 00:29:17.647 12:13:22 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:17.647 12:13:22 -- bdevperf/common.sh@9 -- # local rw= 00:29:17.647 12:13:22 -- bdevperf/common.sh@10 -- # local filename= 00:29:17.647 12:13:22 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:17.647 12:13:22 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:17.647 12:13:22 -- bdevperf/common.sh@19 -- # echo 00:29:17.647 12:13:22 -- bdevperf/common.sh@20 -- # cat 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@39 -- # create_job job1 00:29:17.647 12:13:22 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:17.647 12:13:22 -- bdevperf/common.sh@9 -- # local rw= 00:29:17.647 12:13:22 -- bdevperf/common.sh@10 -- # local filename= 00:29:17.647 12:13:22 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:17.647 12:13:22 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:17.647 12:13:22 -- bdevperf/common.sh@19 -- # echo 00:29:17.647 00:29:17.647 12:13:22 -- bdevperf/common.sh@20 -- # cat 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@40 -- # create_job job2 00:29:17.647 12:13:22 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:17.647 12:13:22 -- bdevperf/common.sh@9 -- # local rw= 00:29:17.647 12:13:22 -- bdevperf/common.sh@10 -- # local filename= 00:29:17.647 12:13:22 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:17.647 12:13:22 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:29:17.647 12:13:22 -- bdevperf/common.sh@19 -- # echo 00:29:17.647 00:29:17.647 12:13:22 -- bdevperf/common.sh@20 -- # cat 00:29:17.647 12:13:22 -- bdevperf/test_config.sh@41 -- # create_job job3 00:29:17.647 12:13:22 -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:17.647 12:13:22 -- bdevperf/common.sh@9 -- # local rw= 00:29:17.647 12:13:22 -- bdevperf/common.sh@10 -- # local filename= 00:29:17.648 12:13:22 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:17.648 12:13:22 -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:17.648 12:13:22 -- bdevperf/common.sh@19 -- # echo 00:29:17.648 00:29:17.648 12:13:22 -- bdevperf/common.sh@20 -- # cat 00:29:17.648 12:13:22 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:20.178 12:13:25 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-29 12:13:22.582321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:20.178 [2024-11-29 12:13:22.582588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144020 ] 00:29:20.178 Using job config with 4 jobs 00:29:20.178 [2024-11-29 12:13:22.728355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.178 [2024-11-29 12:13:22.840420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.178 cpumask for '\''job0'\'' is too big 00:29:20.178 cpumask for '\''job1'\'' is too big 00:29:20.178 cpumask for '\''job2'\'' is too big 00:29:20.178 cpumask for '\''job3'\'' is too big 00:29:20.178 Running I/O for 2 seconds... 
00:29:20.178 00:29:20.178 Latency(us) 00:29:20.178 [2024-11-29T12:13:25.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc0 : 2.02 12274.85 11.99 0.00 0.00 20835.94 3872.58 31933.91 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc1 : 2.04 12278.24 11.99 0.00 0.00 20810.03 4527.94 32172.22 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc0 : 2.05 12267.60 11.98 0.00 0.00 20755.25 3813.00 28240.06 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc1 : 2.05 12257.03 11.97 0.00 0.00 20752.14 4468.36 28359.21 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc0 : 2.05 12246.59 11.96 0.00 0.00 20699.51 3813.00 24546.21 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc1 : 2.05 12235.98 11.95 0.00 0.00 20699.75 4438.57 24665.37 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc0 : 2.05 12225.79 11.94 0.00 0.00 20644.47 3813.00 22639.71 00:29:20.178 [2024-11-29T12:13:25.689Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.178 Malloc1 : 2.05 12215.22 11.93 0.00 0.00 20642.52 4438.57 22520.55 00:29:20.178 [2024-11-29T12:13:25.689Z] =================================================================================================================== 00:29:20.178 [2024-11-29T12:13:25.689Z] Total : 98001.30 95.70 0.00 0.00 20729.81 3813.00 32172.22' 00:29:20.179 12:13:25 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-29 12:13:22.582321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:20.179 [2024-11-29 12:13:22.582588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144020 ] 00:29:20.179 Using job config with 4 jobs 00:29:20.179 [2024-11-29 12:13:22.728355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.179 [2024-11-29 12:13:22.840420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.179 cpumask for '\''job0'\'' is too big 00:29:20.179 cpumask for '\''job1'\'' is too big 00:29:20.179 cpumask for '\''job2'\'' is too big 00:29:20.179 cpumask for '\''job3'\'' is too big 00:29:20.179 Running I/O for 2 seconds... 
00:29:20.179 00:29:20.179 Latency(us) 00:29:20.179 [2024-11-29T12:13:25.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.02 12274.85 11.99 0.00 0.00 20835.94 3872.58 31933.91 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.04 12278.24 11.99 0.00 0.00 20810.03 4527.94 32172.22 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.05 12267.60 11.98 0.00 0.00 20755.25 3813.00 28240.06 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.05 12257.03 11.97 0.00 0.00 20752.14 4468.36 28359.21 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.05 12246.59 11.96 0.00 0.00 20699.51 3813.00 24546.21 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.05 12235.98 11.95 0.00 0.00 20699.75 4438.57 24665.37 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.05 12225.79 11.94 0.00 0.00 20644.47 3813.00 22639.71 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.05 12215.22 11.93 0.00 0.00 20642.52 4438.57 22520.55 00:29:20.179 [2024-11-29T12:13:25.690Z] =================================================================================================================== 00:29:20.179 [2024-11-29T12:13:25.690Z] Total : 98001.30 95.70 0.00 0.00 20729.81 3813.00 32172.22' 00:29:20.179 12:13:25 -- bdevperf/common.sh@32 -- # echo '[2024-11-29 12:13:22.582321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:20.179 [2024-11-29 12:13:22.582588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144020 ] 00:29:20.179 Using job config with 4 jobs 00:29:20.179 [2024-11-29 12:13:22.728355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.179 [2024-11-29 12:13:22.840420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.179 cpumask for '\''job0'\'' is too big 00:29:20.179 cpumask for '\''job1'\'' is too big 00:29:20.179 cpumask for '\''job2'\'' is too big 00:29:20.179 cpumask for '\''job3'\'' is too big 00:29:20.179 Running I/O for 2 seconds... 
00:29:20.179 00:29:20.179 Latency(us) 00:29:20.179 [2024-11-29T12:13:25.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.02 12274.85 11.99 0.00 0.00 20835.94 3872.58 31933.91 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.04 12278.24 11.99 0.00 0.00 20810.03 4527.94 32172.22 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.05 12267.60 11.98 0.00 0.00 20755.25 3813.00 28240.06 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.05 12257.03 11.97 0.00 0.00 20752.14 4468.36 28359.21 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.05 12246.59 11.96 0.00 0.00 20699.51 3813.00 24546.21 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.05 12235.98 11.95 0.00 0.00 20699.75 4438.57 24665.37 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc0 : 2.05 12225.79 11.94 0.00 0.00 20644.47 3813.00 22639.71 00:29:20.179 [2024-11-29T12:13:25.690Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:20.179 Malloc1 : 2.05 12215.22 11.93 0.00 0.00 20642.52 4438.57 22520.55 00:29:20.179 [2024-11-29T12:13:25.690Z] =================================================================================================================== 00:29:20.179 [2024-11-29T12:13:25.690Z] Total : 98001.30 95.70 0.00 0.00 20729.81 3813.00 32172.22' 00:29:20.179 12:13:25 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:20.179 12:13:25 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:20.179 12:13:25 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:29:20.179 12:13:25 -- bdevperf/test_config.sh@44 -- # cleanup 00:29:20.179 12:13:25 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:20.179 12:13:25 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:20.179 ************************************ 00:29:20.179 END TEST bdevperf_config 00:29:20.179 ************************************ 00:29:20.179 00:29:20.179 real 0m11.620s 00:29:20.179 user 0m9.975s 00:29:20.179 sys 0m1.074s 00:29:20.179 12:13:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:20.179 12:13:25 -- common/autotest_common.sh@10 -- # set +x 00:29:20.179 12:13:25 -- spdk/autotest.sh@185 -- # uname -s 00:29:20.179 12:13:25 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]] 00:29:20.179 12:13:25 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:20.179 12:13:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:20.179 12:13:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:20.179 12:13:25 -- common/autotest_common.sh@10 -- # set +x 00:29:20.179 ************************************ 00:29:20.179 START TEST reactor_set_interrupt 00:29:20.179 
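Note on the check that closes the bdevperf_config test above: the trace shows get_num_jobs echoing the captured bdevperf output and grepping it for the "Using job config with N jobs" line, then comparing the number against the expected 4. A minimal sketch of that verification, reconstructed from the trace (not the exact common.sh code; $bdevperf_output is a placeholder for the captured output shown above):

  get_num_jobs() {
      # keep only the job-count line from the captured output, then keep only the number
      echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
  }
  [[ "$(get_num_jobs "$bdevperf_output")" == "4" ]] && echo "job count OK"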
************************************ 00:29:20.179 12:13:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:20.179 * Looking for test storage... 00:29:20.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.179 12:13:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:20.179 12:13:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:20.179 12:13:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:20.179 12:13:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:20.179 12:13:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:20.179 12:13:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:20.179 12:13:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:20.179 12:13:25 -- scripts/common.sh@335 -- # IFS=.-: 00:29:20.179 12:13:25 -- scripts/common.sh@335 -- # read -ra ver1 00:29:20.179 12:13:25 -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.179 12:13:25 -- scripts/common.sh@336 -- # read -ra ver2 00:29:20.179 12:13:25 -- scripts/common.sh@337 -- # local 'op=<' 00:29:20.179 12:13:25 -- scripts/common.sh@339 -- # ver1_l=2 00:29:20.179 12:13:25 -- scripts/common.sh@340 -- # ver2_l=1 00:29:20.179 12:13:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:20.179 12:13:25 -- scripts/common.sh@343 -- # case "$op" in 00:29:20.179 12:13:25 -- scripts/common.sh@344 -- # : 1 00:29:20.179 12:13:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:20.179 12:13:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.179 12:13:25 -- scripts/common.sh@364 -- # decimal 1 00:29:20.179 12:13:25 -- scripts/common.sh@352 -- # local d=1 00:29:20.179 12:13:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.179 12:13:25 -- scripts/common.sh@354 -- # echo 1 00:29:20.179 12:13:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:20.179 12:13:25 -- scripts/common.sh@365 -- # decimal 2 00:29:20.179 12:13:25 -- scripts/common.sh@352 -- # local d=2 00:29:20.179 12:13:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.179 12:13:25 -- scripts/common.sh@354 -- # echo 2 00:29:20.179 12:13:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:20.179 12:13:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:20.179 12:13:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:20.179 12:13:25 -- scripts/common.sh@367 -- # return 0 00:29:20.179 12:13:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.179 12:13:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.179 --rc genhtml_branch_coverage=1 00:29:20.179 --rc genhtml_function_coverage=1 00:29:20.179 --rc genhtml_legend=1 00:29:20.179 --rc geninfo_all_blocks=1 00:29:20.179 --rc geninfo_unexecuted_blocks=1 00:29:20.179 00:29:20.179 ' 00:29:20.179 12:13:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.179 --rc genhtml_branch_coverage=1 00:29:20.179 --rc genhtml_function_coverage=1 00:29:20.179 --rc genhtml_legend=1 00:29:20.179 --rc geninfo_all_blocks=1 00:29:20.179 --rc geninfo_unexecuted_blocks=1 00:29:20.179 00:29:20.179 ' 00:29:20.179 12:13:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.179 --rc genhtml_branch_coverage=1 
00:29:20.179 --rc genhtml_function_coverage=1 00:29:20.179 --rc genhtml_legend=1 00:29:20.179 --rc geninfo_all_blocks=1 00:29:20.179 --rc geninfo_unexecuted_blocks=1 00:29:20.179 00:29:20.179 ' 00:29:20.179 12:13:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.179 --rc genhtml_branch_coverage=1 00:29:20.179 --rc genhtml_function_coverage=1 00:29:20.179 --rc genhtml_legend=1 00:29:20.179 --rc geninfo_all_blocks=1 00:29:20.179 --rc geninfo_unexecuted_blocks=1 00:29:20.179 00:29:20.179 ' 00:29:20.179 12:13:25 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:29:20.179 12:13:25 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:20.179 12:13:25 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.179 12:13:25 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.179 12:13:25 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:29:20.179 12:13:25 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:20.179 12:13:25 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:29:20.179 12:13:25 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:20.179 12:13:25 -- common/autotest_common.sh@34 -- # set -e 00:29:20.179 12:13:25 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:20.179 12:13:25 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:20.179 12:13:25 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:29:20.179 12:13:25 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:29:20.179 12:13:25 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:20.179 12:13:25 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:29:20.179 12:13:25 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:29:20.179 12:13:25 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:20.179 12:13:25 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:20.179 12:13:25 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:20.179 12:13:25 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:29:20.179 12:13:25 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:20.179 12:13:25 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:20.179 12:13:25 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:20.179 12:13:25 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:20.179 12:13:25 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:20.179 12:13:25 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:20.179 12:13:25 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:20.179 12:13:25 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:20.179 12:13:25 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:20.179 12:13:25 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:20.179 12:13:25 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:20.179 12:13:25 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:20.179 12:13:25 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:20.179 12:13:25 -- common/build_config.sh@21 -- 
# CONFIG_ISCSI_INITIATOR=y 00:29:20.179 12:13:25 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:20.179 12:13:25 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:29:20.179 12:13:25 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:20.179 12:13:25 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:20.179 12:13:25 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:29:20.179 12:13:25 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:20.179 12:13:25 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:29:20.179 12:13:25 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:20.179 12:13:25 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:20.179 12:13:25 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:20.179 12:13:25 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:20.179 12:13:25 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:20.180 12:13:25 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:20.180 12:13:25 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:20.180 12:13:25 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:29:20.180 12:13:25 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:29:20.180 12:13:25 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:20.180 12:13:25 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:20.180 12:13:25 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:20.180 12:13:25 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:29:20.180 12:13:25 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:20.180 12:13:25 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:29:20.180 12:13:25 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:20.180 12:13:25 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:20.180 12:13:25 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:29:20.180 12:13:25 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:29:20.180 12:13:25 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:20.180 12:13:25 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:29:20.180 12:13:25 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:29:20.180 12:13:25 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:29:20.180 12:13:25 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:29:20.180 12:13:25 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:29:20.180 12:13:25 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:29:20.180 12:13:25 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:29:20.180 12:13:25 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:29:20.180 12:13:25 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:29:20.180 12:13:25 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:29:20.180 12:13:25 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:29:20.180 12:13:25 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:29:20.180 12:13:25 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:29:20.180 12:13:25 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:29:20.180 12:13:25 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:29:20.180 12:13:25 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:29:20.180 12:13:25 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:29:20.180 12:13:25 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:20.180 12:13:25 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:29:20.180 12:13:25 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:29:20.180 12:13:25 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:29:20.180 12:13:25 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:29:20.180 12:13:25 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:29:20.180 12:13:25 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:29:20.180 12:13:25 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:29:20.180 12:13:25 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:29:20.180 12:13:25 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:29:20.180 12:13:25 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:29:20.180 12:13:25 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:29:20.180 12:13:25 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:29:20.180 12:13:25 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:29:20.180 12:13:25 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:20.180 12:13:25 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:20.180 12:13:25 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:29:20.180 12:13:25 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:29:20.180 12:13:25 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:29:20.180 12:13:25 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:29:20.180 12:13:25 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:29:20.180 12:13:25 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:29:20.180 12:13:25 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:20.180 12:13:25 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:20.180 12:13:25 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:20.180 12:13:25 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:20.180 12:13:25 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:20.180 12:13:25 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:20.180 12:13:25 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:29:20.180 12:13:25 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:20.180 #define SPDK_CONFIG_H 00:29:20.180 #define SPDK_CONFIG_APPS 1 00:29:20.180 #define SPDK_CONFIG_ARCH native 00:29:20.180 #define SPDK_CONFIG_ASAN 1 00:29:20.180 #undef SPDK_CONFIG_AVAHI 00:29:20.180 #undef SPDK_CONFIG_CET 00:29:20.180 #define SPDK_CONFIG_COVERAGE 1 00:29:20.180 #define SPDK_CONFIG_CROSS_PREFIX 00:29:20.180 #undef SPDK_CONFIG_CRYPTO 00:29:20.180 #undef SPDK_CONFIG_CRYPTO_MLX5 00:29:20.180 #undef SPDK_CONFIG_CUSTOMOCF 00:29:20.180 #undef SPDK_CONFIG_DAOS 00:29:20.180 #define SPDK_CONFIG_DAOS_DIR 00:29:20.180 #define SPDK_CONFIG_DEBUG 1 00:29:20.180 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:29:20.180 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:29:20.180 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:29:20.180 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:29:20.180 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:20.180 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 
00:29:20.180 #define SPDK_CONFIG_EXAMPLES 1 00:29:20.180 #undef SPDK_CONFIG_FC 00:29:20.180 #define SPDK_CONFIG_FC_PATH 00:29:20.180 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:20.180 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:20.180 #undef SPDK_CONFIG_FUSE 00:29:20.180 #undef SPDK_CONFIG_FUZZER 00:29:20.180 #define SPDK_CONFIG_FUZZER_LIB 00:29:20.180 #undef SPDK_CONFIG_GOLANG 00:29:20.180 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:29:20.180 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:20.180 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:20.180 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:20.180 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:20.180 #define SPDK_CONFIG_IDXD 1 00:29:20.180 #undef SPDK_CONFIG_IDXD_KERNEL 00:29:20.180 #undef SPDK_CONFIG_IPSEC_MB 00:29:20.180 #define SPDK_CONFIG_IPSEC_MB_DIR 00:29:20.180 #define SPDK_CONFIG_ISAL 1 00:29:20.180 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:20.180 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:20.180 #define SPDK_CONFIG_LIBDIR 00:29:20.180 #undef SPDK_CONFIG_LTO 00:29:20.180 #define SPDK_CONFIG_MAX_LCORES 00:29:20.180 #define SPDK_CONFIG_NVME_CUSE 1 00:29:20.180 #undef SPDK_CONFIG_OCF 00:29:20.180 #define SPDK_CONFIG_OCF_PATH 00:29:20.180 #define SPDK_CONFIG_OPENSSL_PATH 00:29:20.180 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:20.180 #undef SPDK_CONFIG_PGO_USE 00:29:20.180 #define SPDK_CONFIG_PREFIX /usr/local 00:29:20.180 #define SPDK_CONFIG_RAID5F 1 00:29:20.180 #undef SPDK_CONFIG_RBD 00:29:20.180 #define SPDK_CONFIG_RDMA 1 00:29:20.180 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:20.180 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:20.180 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:20.180 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:20.180 #undef SPDK_CONFIG_SHARED 00:29:20.180 #undef SPDK_CONFIG_SMA 00:29:20.180 #define SPDK_CONFIG_TESTS 1 00:29:20.180 #undef SPDK_CONFIG_TSAN 00:29:20.180 #undef SPDK_CONFIG_UBLK 00:29:20.180 #define SPDK_CONFIG_UBSAN 1 00:29:20.180 #define SPDK_CONFIG_UNIT_TESTS 1 00:29:20.180 #undef SPDK_CONFIG_URING 00:29:20.180 #define SPDK_CONFIG_URING_PATH 00:29:20.180 #undef SPDK_CONFIG_URING_ZNS 00:29:20.180 #undef SPDK_CONFIG_USDT 00:29:20.180 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:29:20.180 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:29:20.180 #undef SPDK_CONFIG_VFIO_USER 00:29:20.180 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:20.180 #define SPDK_CONFIG_VHOST 1 00:29:20.180 #define SPDK_CONFIG_VIRTIO 1 00:29:20.180 #undef SPDK_CONFIG_VTUNE 00:29:20.180 #define SPDK_CONFIG_VTUNE_DIR 00:29:20.180 #define SPDK_CONFIG_WERROR 1 00:29:20.180 #define SPDK_CONFIG_WPDK_DIR 00:29:20.180 #undef SPDK_CONFIG_XNVME 00:29:20.180 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:20.180 12:13:25 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:20.180 12:13:25 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:20.180 12:13:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.180 12:13:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.180 12:13:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.180 12:13:25 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:20.180 12:13:25 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:20.180 12:13:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:20.180 12:13:25 -- paths/export.sh@5 -- # export PATH 00:29:20.180 12:13:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:20.180 12:13:25 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:20.180 12:13:25 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:20.180 12:13:25 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:20.180 12:13:25 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:20.180 12:13:25 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:29:20.180 12:13:25 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:29:20.180 12:13:25 -- pm/common@16 -- # TEST_TAG=N/A 00:29:20.180 12:13:25 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:29:20.180 12:13:25 -- common/autotest_common.sh@52 -- # : 1 00:29:20.180 12:13:25 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:29:20.440 12:13:25 -- common/autotest_common.sh@56 -- # : 0 00:29:20.440 12:13:25 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:20.440 12:13:25 -- common/autotest_common.sh@58 -- # : 0 00:29:20.440 12:13:25 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:29:20.440 12:13:25 -- common/autotest_common.sh@60 -- # : 1 00:29:20.440 12:13:25 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:20.440 12:13:25 -- common/autotest_common.sh@62 -- # : 1 00:29:20.440 12:13:25 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:29:20.440 12:13:25 -- common/autotest_common.sh@64 -- # : 00:29:20.440 12:13:25 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:29:20.440 12:13:25 -- common/autotest_common.sh@66 -- # : 0 00:29:20.440 12:13:25 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:29:20.440 
12:13:25 -- common/autotest_common.sh@68 -- # : 0 00:29:20.440 12:13:25 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:29:20.440 12:13:25 -- common/autotest_common.sh@70 -- # : 0 00:29:20.440 12:13:25 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:29:20.440 12:13:25 -- common/autotest_common.sh@72 -- # : 0 00:29:20.440 12:13:25 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:20.440 12:13:25 -- common/autotest_common.sh@74 -- # : 1 00:29:20.441 12:13:25 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:29:20.441 12:13:25 -- common/autotest_common.sh@76 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:29:20.441 12:13:25 -- common/autotest_common.sh@78 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:29:20.441 12:13:25 -- common/autotest_common.sh@80 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:29:20.441 12:13:25 -- common/autotest_common.sh@82 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:29:20.441 12:13:25 -- common/autotest_common.sh@84 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:29:20.441 12:13:25 -- common/autotest_common.sh@86 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:29:20.441 12:13:25 -- common/autotest_common.sh@88 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:29:20.441 12:13:25 -- common/autotest_common.sh@90 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:20.441 12:13:25 -- common/autotest_common.sh@92 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:29:20.441 12:13:25 -- common/autotest_common.sh@94 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:29:20.441 12:13:25 -- common/autotest_common.sh@96 -- # : rdma 00:29:20.441 12:13:25 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:20.441 12:13:25 -- common/autotest_common.sh@98 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:29:20.441 12:13:25 -- common/autotest_common.sh@100 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:29:20.441 12:13:25 -- common/autotest_common.sh@102 -- # : 1 00:29:20.441 12:13:25 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:29:20.441 12:13:25 -- common/autotest_common.sh@104 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:29:20.441 12:13:25 -- common/autotest_common.sh@106 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:29:20.441 12:13:25 -- common/autotest_common.sh@108 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:29:20.441 12:13:25 -- common/autotest_common.sh@110 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:29:20.441 12:13:25 -- common/autotest_common.sh@112 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:20.441 12:13:25 -- common/autotest_common.sh@114 -- # : 1 00:29:20.441 12:13:25 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 
00:29:20.441 12:13:25 -- common/autotest_common.sh@116 -- # : 1 00:29:20.441 12:13:25 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:29:20.441 12:13:25 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:29:20.441 12:13:25 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:20.441 12:13:25 -- common/autotest_common.sh@120 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:29:20.441 12:13:25 -- common/autotest_common.sh@122 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:29:20.441 12:13:25 -- common/autotest_common.sh@124 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:29:20.441 12:13:25 -- common/autotest_common.sh@126 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:29:20.441 12:13:25 -- common/autotest_common.sh@128 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:29:20.441 12:13:25 -- common/autotest_common.sh@130 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:29:20.441 12:13:25 -- common/autotest_common.sh@132 -- # : v22.11.4 00:29:20.441 12:13:25 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:29:20.441 12:13:25 -- common/autotest_common.sh@134 -- # : true 00:29:20.441 12:13:25 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:29:20.441 12:13:25 -- common/autotest_common.sh@136 -- # : 1 00:29:20.441 12:13:25 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:29:20.441 12:13:25 -- common/autotest_common.sh@138 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:29:20.441 12:13:25 -- common/autotest_common.sh@140 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:29:20.441 12:13:25 -- common/autotest_common.sh@142 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:29:20.441 12:13:25 -- common/autotest_common.sh@144 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:29:20.441 12:13:25 -- common/autotest_common.sh@146 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:29:20.441 12:13:25 -- common/autotest_common.sh@148 -- # : 00:29:20.441 12:13:25 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:29:20.441 12:13:25 -- common/autotest_common.sh@150 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:29:20.441 12:13:25 -- common/autotest_common.sh@152 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:29:20.441 12:13:25 -- common/autotest_common.sh@154 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:29:20.441 12:13:25 -- common/autotest_common.sh@156 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:29:20.441 12:13:25 -- common/autotest_common.sh@158 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:29:20.441 12:13:25 -- common/autotest_common.sh@160 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:29:20.441 12:13:25 -- common/autotest_common.sh@163 -- # : 00:29:20.441 12:13:25 -- 
common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:29:20.441 12:13:25 -- common/autotest_common.sh@165 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:29:20.441 12:13:25 -- common/autotest_common.sh@167 -- # : 0 00:29:20.441 12:13:25 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:20.441 12:13:25 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:20.441 12:13:25 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:20.441 12:13:25 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:29:20.441 12:13:25 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:29:20.441 12:13:25 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:20.441 12:13:25 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:20.441 12:13:25 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:20.442 12:13:25 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:20.442 12:13:25 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:20.442 12:13:25 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:20.442 12:13:25 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:20.442 12:13:25 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:20.442 12:13:25 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:20.442 12:13:25 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:29:20.442 12:13:25 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:20.442 12:13:25 -- common/autotest_common.sh@189 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:20.442 12:13:25 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:20.442 12:13:25 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:20.442 12:13:25 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:20.442 12:13:25 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:29:20.442 12:13:25 -- common/autotest_common.sh@196 -- # cat 00:29:20.442 12:13:25 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:29:20.442 12:13:25 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:20.442 12:13:25 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:20.442 12:13:25 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:20.442 12:13:25 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:20.442 12:13:25 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:29:20.442 12:13:25 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:29:20.442 12:13:25 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:20.442 12:13:25 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:20.442 12:13:25 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:20.442 12:13:25 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:20.442 12:13:25 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:29:20.442 12:13:25 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:29:20.442 12:13:25 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:20.442 12:13:25 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:20.442 12:13:25 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:20.442 12:13:25 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:20.442 12:13:25 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:20.442 12:13:25 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:20.442 12:13:25 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:29:20.442 12:13:25 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:29:20.442 12:13:25 -- common/autotest_common.sh@249 -- # _LCOV= 00:29:20.442 12:13:25 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:29:20.442 12:13:25 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:29:20.442 12:13:25 -- common/autotest_common.sh@255 -- # lcov_opt= 00:29:20.442 12:13:25 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:29:20.442 12:13:25 -- common/autotest_common.sh@259 -- # export valgrind= 00:29:20.442 
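Note on the environment setup traced above: the long run of ": <value>" / "export VAR" pairs is autotest_common.sh publishing the SPDK_TEST_* feature flags that gate individual suites, and the ASAN/UBSAN/LSAN lines configure the sanitizers for every child process. A rough sketch of both idioms; the ": ${VAR:=0}" spelling is an assumption (the trace only shows the expanded pairs), while the option strings are copied from the trace:

  : "${SPDK_TEST_NVME:=0}"          # keep a caller-provided value, otherwise default to 0
  export SPDK_TEST_NVME
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" > "$asan_suppression_file"   # ignore a known libfuse leak (simplified; the script also appends existing suppressions)
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"
  export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
  export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"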
12:13:25 -- common/autotest_common.sh@259 -- # valgrind= 00:29:20.442 12:13:25 -- common/autotest_common.sh@265 -- # uname -s 00:29:20.442 12:13:25 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:29:20.442 12:13:25 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:29:20.442 12:13:25 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:29:20.442 12:13:25 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:29:20.442 12:13:25 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@275 -- # MAKE=make 00:29:20.442 12:13:25 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:29:20.442 12:13:25 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:29:20.442 12:13:25 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:29:20.442 12:13:25 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:29:20.442 12:13:25 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:29:20.442 12:13:25 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:29:20.442 12:13:25 -- common/autotest_common.sh@319 -- # [[ -z 144096 ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@319 -- # kill -0 144096 00:29:20.442 12:13:25 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:29:20.442 12:13:25 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:29:20.442 12:13:25 -- common/autotest_common.sh@332 -- # local mount target_dir 00:29:20.442 12:13:25 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:29:20.442 12:13:25 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:29:20.442 12:13:25 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:29:20.442 12:13:25 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:29:20.442 12:13:25 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.1jUals 00:29:20.442 12:13:25 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:20.442 12:13:25 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:29:20.442 12:13:25 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.1jUals/tests/interrupt /tmp/spdk.1jUals 00:29:20.442 12:13:25 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:29:20.442 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.442 12:13:25 -- common/autotest_common.sh@328 -- # df -T 00:29:20.442 12:13:25 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248944128 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:29:20.442 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=4734976 00:29:20.442 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:29:20.442 12:13:25 -- 
common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=8792256512 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:29:20.442 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=11807760384 00:29:20.442 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267133952 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268391424 00:29:20.442 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:29:20.442 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:20.442 12:13:25 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:29:20.442 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:29:20.442 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:29:20.443 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.443 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:29:20.443 12:13:25 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:29:20.443 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:29:20.443 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:29:20.443 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:29:20.443 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.443 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:20.443 12:13:25 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:20.443 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253670912 00:29:20.443 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253675008 00:29:20.443 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:29:20.443 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.443 12:13:25 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:29:20.443 12:13:25 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:29:20.443 12:13:25 -- common/autotest_common.sh@363 -- # avails["$mount"]=93591556096 00:29:20.443 12:13:25 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:29:20.443 12:13:25 -- common/autotest_common.sh@364 -- # uses["$mount"]=6111223808 00:29:20.443 12:13:25 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:20.443 12:13:25 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:29:20.443 * Looking for test storage... 
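Note on the storage probe traced above: set_test_storage walks the df output, finds the mount backing test/interrupt, and accepts it only if about 2 GiB of scratch space fits without pushing the filesystem past roughly 95% utilization (the trace shows target_space, new_size and the "> 95" check). A condensed sketch of that decision; df -B1 --output and the helper name are illustrative, not the exact autotest_common.sh code:

  check_test_storage() {
      local dir=$1 requested=$2 size used avail
      read -r size used avail < <(df -B1 --output=size,used,avail "$dir" | tail -n1)
      (( avail >= requested )) || return 1                      # not enough free space on this mount
      (( (used + requested) * 100 / size <= 95 ))               # and we would stay under ~95% use
  }
  check_test_storage /home/vagrant/spdk_repo/spdk/test/interrupt $((2 * 1024 ** 3)) && echo "storage OK"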
00:29:20.443 12:13:25 -- common/autotest_common.sh@369 -- # local target_space new_size 00:29:20.443 12:13:25 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:29:20.443 12:13:25 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:20.443 12:13:25 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.443 12:13:25 -- common/autotest_common.sh@373 -- # mount=/ 00:29:20.443 12:13:25 -- common/autotest_common.sh@375 -- # target_space=8792256512 00:29:20.443 12:13:25 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:29:20.443 12:13:25 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:29:20.443 12:13:25 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:29:20.443 12:13:25 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:29:20.443 12:13:25 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:29:20.443 12:13:25 -- common/autotest_common.sh@382 -- # new_size=14022352896 00:29:20.443 12:13:25 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:20.443 12:13:25 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.443 12:13:25 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.443 12:13:25 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:20.443 12:13:25 -- common/autotest_common.sh@390 -- # return 0 00:29:20.443 12:13:25 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:29:20.443 12:13:25 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:29:20.443 12:13:25 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:20.443 12:13:25 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:20.443 12:13:25 -- common/autotest_common.sh@1682 -- # true 00:29:20.443 12:13:25 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:29:20.443 12:13:25 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:20.443 12:13:25 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:20.443 12:13:25 -- common/autotest_common.sh@27 -- # exec 00:29:20.443 12:13:25 -- common/autotest_common.sh@29 -- # exec 00:29:20.443 12:13:25 -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:20.443 12:13:25 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:29:20.443 12:13:25 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:20.443 12:13:25 -- common/autotest_common.sh@18 -- # set -x 00:29:20.443 12:13:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:20.443 12:13:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:20.443 12:13:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:20.443 12:13:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:20.443 12:13:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:20.443 12:13:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:20.443 12:13:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:20.443 12:13:25 -- scripts/common.sh@335 -- # IFS=.-: 00:29:20.443 12:13:25 -- scripts/common.sh@335 -- # read -ra ver1 00:29:20.443 12:13:25 -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.443 12:13:25 -- scripts/common.sh@336 -- # read -ra ver2 00:29:20.443 12:13:25 -- scripts/common.sh@337 -- # local 'op=<' 00:29:20.443 12:13:25 -- scripts/common.sh@339 -- # ver1_l=2 00:29:20.443 12:13:25 -- scripts/common.sh@340 -- # ver2_l=1 00:29:20.443 12:13:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:20.443 12:13:25 -- scripts/common.sh@343 -- # case "$op" in 00:29:20.443 12:13:25 -- scripts/common.sh@344 -- # : 1 00:29:20.443 12:13:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:20.443 12:13:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.443 12:13:25 -- scripts/common.sh@364 -- # decimal 1 00:29:20.443 12:13:25 -- scripts/common.sh@352 -- # local d=1 00:29:20.443 12:13:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.443 12:13:25 -- scripts/common.sh@354 -- # echo 1 00:29:20.443 12:13:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:20.443 12:13:25 -- scripts/common.sh@365 -- # decimal 2 00:29:20.443 12:13:25 -- scripts/common.sh@352 -- # local d=2 00:29:20.443 12:13:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.443 12:13:25 -- scripts/common.sh@354 -- # echo 2 00:29:20.443 12:13:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:20.443 12:13:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:20.443 12:13:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:20.443 12:13:25 -- scripts/common.sh@367 -- # return 0 00:29:20.443 12:13:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.443 12:13:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.443 --rc genhtml_branch_coverage=1 00:29:20.443 --rc genhtml_function_coverage=1 00:29:20.443 --rc genhtml_legend=1 00:29:20.443 --rc geninfo_all_blocks=1 00:29:20.443 --rc geninfo_unexecuted_blocks=1 00:29:20.443 00:29:20.443 ' 00:29:20.443 12:13:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.443 --rc genhtml_branch_coverage=1 00:29:20.443 --rc genhtml_function_coverage=1 00:29:20.443 --rc genhtml_legend=1 00:29:20.443 --rc geninfo_all_blocks=1 00:29:20.443 --rc geninfo_unexecuted_blocks=1 00:29:20.443 00:29:20.443 ' 00:29:20.443 12:13:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.443 --rc genhtml_branch_coverage=1 00:29:20.443 --rc genhtml_function_coverage=1 00:29:20.443 --rc genhtml_legend=1 00:29:20.443 --rc geninfo_all_blocks=1 00:29:20.443 --rc 
geninfo_unexecuted_blocks=1 00:29:20.443 00:29:20.443 ' 00:29:20.444 12:13:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.444 --rc genhtml_branch_coverage=1 00:29:20.444 --rc genhtml_function_coverage=1 00:29:20.444 --rc genhtml_legend=1 00:29:20.444 --rc geninfo_all_blocks=1 00:29:20.444 --rc geninfo_unexecuted_blocks=1 00:29:20.444 00:29:20.444 ' 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:20.444 12:13:25 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:20.444 12:13:25 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:20.444 12:13:25 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144152 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:20.444 12:13:25 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144152 /var/tmp/spdk.sock 00:29:20.444 12:13:25 -- common/autotest_common.sh@829 -- # '[' -z 144152 ']' 00:29:20.444 12:13:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.444 12:13:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.444 12:13:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.444 12:13:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.444 12:13:25 -- common/autotest_common.sh@10 -- # set +x 00:29:20.444 [2024-11-29 12:13:25.944457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
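Note on the target startup traced here: interrupt_tgt is launched on cores 0-2 (-m 0x07) in interrupt mode (-E) with its RPC socket at /var/tmp/spdk.sock, and waitforlisten blocks until that socket answers so the rpc.py calls that follow do not race the startup. Roughly what that wait looks like; the retry count and the rpc_get_methods probe follow the trace and common SPDK usage, not the literal helper:

  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1                               # target exited early
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null; then
              return 0                                                         # RPC socket is up
          fi
          sleep 0.5
      done
      return 1
  }
  waitforlisten_sketch 144152 /var/tmp/spdk.sock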
00:29:20.444 [2024-11-29 12:13:25.944799] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144152 ] 00:29:20.702 [2024-11-29 12:13:26.113455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:20.702 [2024-11-29 12:13:26.210122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.703 [2024-11-29 12:13:26.210210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.703 [2024-11-29 12:13:26.210213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.961 [2024-11-29 12:13:26.296860] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:21.527 12:13:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.527 12:13:26 -- common/autotest_common.sh@862 -- # return 0 00:29:21.527 12:13:26 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:29:21.527 12:13:26 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:21.786 Malloc0 00:29:21.786 Malloc1 00:29:21.786 Malloc2 00:29:21.786 12:13:27 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:29:21.786 12:13:27 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:21.786 12:13:27 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:21.786 12:13:27 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:21.786 5000+0 records in 00:29:21.786 5000+0 records out 00:29:21.786 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0249607 s, 410 MB/s 00:29:21.786 12:13:27 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:29:22.043 AIO0 00:29:22.302 12:13:27 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 144152 00:29:22.302 12:13:27 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 144152 without_thd 00:29:22.302 12:13:27 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=144152 00:29:22.302 12:13:27 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:29:22.302 12:13:27 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:22.302 12:13:27 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:22.302 12:13:27 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:29:22.302 12:13:27 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:22.302 12:13:27 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:29:22.302 12:13:27 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:22.302 12:13:27 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:22.302 12:13:27 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:29:22.560 12:13:27 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:22.560 12:13:27 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:22.560 12:13:27 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:29:22.818 12:13:28 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:22.818 spdk_thread ids are 1 on reactor0. 00:29:22.818 12:13:28 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:22.818 12:13:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:22.818 12:13:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144152 0 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144152 0 idle 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144152 root 20 0 20.1t 58120 26140 S 0.0 0.5 0:00.38 reactor_0' 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@48 -- # echo 144152 root 20 0 20.1t 58120 26140 S 0.0 0.5 0:00.38 reactor_0 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:22.818 12:13:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:22.818 12:13:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144152 1 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144152 1 idle 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:22.818 
12:13:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:22.818 12:13:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144161 root 20 0 20.1t 58120 26140 S 0.0 0.5 0:00.00 reactor_1' 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@48 -- # echo 144161 root 20 0 20.1t 58120 26140 S 0.0 0.5 0:00.00 reactor_1 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:23.075 12:13:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:23.075 12:13:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144152 2 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144152 2 idle 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:23.075 12:13:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144162 root 20 0 20.1t 58120 26140 S 0.0 0.5 0:00.00 reactor_2' 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@48 -- # echo 144162 root 20 0 20.1t 58120 26140 S 0.0 0.5 0:00.00 reactor_2 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:23.333 12:13:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:23.333 12:13:28 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:29:23.333 12:13:28 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:29:23.333 
12:13:28 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:29:23.593 [2024-11-29 12:13:28.885466] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:23.593 12:13:28 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:23.852 [2024-11-29 12:13:29.125312] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:23.852 [2024-11-29 12:13:29.126048] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:23.852 12:13:29 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:23.852 [2024-11-29 12:13:29.365185] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:23.852 [2024-11-29 12:13:29.366044] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:24.110 12:13:29 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:24.110 12:13:29 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144152 0 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144152 0 busy 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144152 root 20 0 20.1t 58284 26140 R 99.9 0.5 0:00.80 reactor_0' 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@48 -- # echo 144152 root 20 0 20.1t 58284 26140 R 99.9 0.5 0:00.80 reactor_0 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:24.110 12:13:29 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:24.110 12:13:29 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144152 2 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144152 2 busy 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:24.110 12:13:29 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:24.110 12:13:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144162 root 20 0 20.1t 58284 26140 R 99.9 0.5 0:00.35 reactor_2' 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@48 -- # echo 144162 root 20 0 20.1t 58284 26140 R 99.9 0.5 0:00.35 reactor_2 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:24.369 12:13:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:24.369 12:13:29 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:24.627 [2024-11-29 12:13:29.953147] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:29:24.627 [2024-11-29 12:13:29.953905] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:24.627 12:13:29 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:29:24.627 12:13:29 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 144152 2 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144152 2 idle 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:24.627 12:13:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:24.627 12:13:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144162 root 20 0 20.1t 58332 26140 S 0.0 0.5 0:00.58 reactor_2' 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@48 -- # echo 144162 root 20 0 20.1t 58332 26140 S 0.0 0.5 0:00.58 reactor_2 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@49 -- 
# cpu_rate=0 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:24.885 12:13:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:24.885 12:13:30 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:25.144 [2024-11-29 12:13:30.425153] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:25.144 [2024-11-29 12:13:30.425971] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:25.144 12:13:30 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:29:25.144 12:13:30 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:29:25.144 12:13:30 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:29:25.402 [2024-11-29 12:13:30.721610] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:25.402 12:13:30 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 144152 0 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144152 0 idle 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@33 -- # local pid=144152 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144152 -w 256 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144152 root 20 0 20.1t 58436 26140 S 6.7 0.5 0:01.69 reactor_0' 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@48 -- # echo 144152 root 20 0 20.1t 58436 26140 S 6.7 0.5 0:01.69 reactor_0 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:29:25.402 12:13:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:25.402 12:13:30 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:25.402 12:13:30 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:29:25.402 12:13:30 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:29:25.402 12:13:30 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 144152 00:29:25.402 12:13:30 -- 
common/autotest_common.sh@936 -- # '[' -z 144152 ']' 00:29:25.402 12:13:30 -- common/autotest_common.sh@940 -- # kill -0 144152 00:29:25.402 12:13:30 -- common/autotest_common.sh@941 -- # uname 00:29:25.661 12:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:25.661 12:13:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144152 00:29:25.661 12:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:25.661 12:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:25.661 killing process with pid 144152 00:29:25.661 12:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144152' 00:29:25.661 12:13:30 -- common/autotest_common.sh@955 -- # kill 144152 00:29:25.661 12:13:30 -- common/autotest_common.sh@960 -- # wait 144152 00:29:25.920 12:13:31 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:25.920 12:13:31 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144297 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:25.920 12:13:31 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144297 /var/tmp/spdk.sock 00:29:25.920 12:13:31 -- common/autotest_common.sh@829 -- # '[' -z 144297 ']' 00:29:25.920 12:13:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.920 12:13:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.920 12:13:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.920 12:13:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.920 12:13:31 -- common/autotest_common.sh@10 -- # set +x 00:29:25.920 [2024-11-29 12:13:31.319021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:25.920 [2024-11-29 12:13:31.319270] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144297 ] 00:29:26.178 [2024-11-29 12:13:31.476900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:26.178 [2024-11-29 12:13:31.575386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.178 [2024-11-29 12:13:31.575567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.178 [2024-11-29 12:13:31.575652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.178 [2024-11-29 12:13:31.663349] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
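Before the second target (pid 144297) is exercised below, the same bdev setup runs again. A sketch of setup_bdev_aio as it appears from the traced commands (dd plus bdev_aio_create over the RPC socket); the surrounding function body is an assumption, the individual commands are the ones traced.

setup_bdev_aio() {
        # AIO bdevs are only set up on Linux, as the uname check in the trace shows
        if [[ $(uname -s) != "FreeBSD" ]]; then
                # 10 MB backing file (2048-byte blocks x 5000)
                dd if=/dev/zero of="$testdir/aiofile" bs=2048 count=5000
                # Expose it to the running target as bdev AIO0 over the RPC socket
                "$rpc_py" bdev_aio_create "$testdir/aiofile" AIO0 2048
        fi
}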
00:29:27.113 12:13:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.113 12:13:32 -- common/autotest_common.sh@862 -- # return 0 00:29:27.113 12:13:32 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:29:27.113 12:13:32 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:27.113 Malloc0 00:29:27.113 Malloc1 00:29:27.113 Malloc2 00:29:27.113 12:13:32 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:29:27.113 12:13:32 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:27.113 12:13:32 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:27.113 12:13:32 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:27.371 5000+0 records in 00:29:27.371 5000+0 records out 00:29:27.371 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0240426 s, 426 MB/s 00:29:27.371 12:13:32 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:29:27.630 AIO0 00:29:27.630 12:13:32 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 144297 00:29:27.630 12:13:32 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 144297 00:29:27.630 12:13:32 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=144297 00:29:27.630 12:13:32 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:29:27.630 12:13:32 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:27.630 12:13:32 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:27.630 12:13:32 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:29:27.630 12:13:32 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:27.630 12:13:32 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:29:27.630 12:13:32 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:27.630 12:13:32 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:27.630 12:13:32 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:29:27.889 12:13:33 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:27.889 12:13:33 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:27.889 12:13:33 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:29:28.147 12:13:33 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:28.147 12:13:33 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on 
reactor0.' 00:29:28.147 spdk_thread ids are 1 on reactor0. 00:29:28.147 12:13:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:28.147 12:13:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144297 0 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144297 0 idle 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:28.147 12:13:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144297 root 20 0 20.1t 57076 26344 S 6.7 0.5 0:00.35 reactor_0' 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # echo 144297 root 20 0 20.1t 57076 26344 S 6.7 0.5 0:00.35 reactor_0 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:28.406 12:13:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:28.406 12:13:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144297 1 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144297 1 idle 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144301 root 20 0 20.1t 57076 26344 S 0.0 0.5 0:00.00 reactor_1' 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # echo 144301 root 20 0 20.1t 57076 26344 S 0.0 0.5 0:00.00 reactor_1 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 
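The idle and busy checks traced throughout this run all follow the same pattern. A sketch of reactor_is_busy_or_idle pieced together from the traced pipeline and comparisons; the single-pass control flow is a simplification (the trace shows a retry counter j starting at 10, and a hash top probe) and the exact function body is an assumption.

reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local top_reactor cpu_rate
        # Grab the per-thread top line for this reactor and pull the %CPU column
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*} # 99.9 -> 99, 0.0 -> 0, as in the trace
        # Thresholds seen in the trace: busy means at least 70% CPU, idle means at most 30%
        if [[ $state == busy ]] && ((cpu_rate < 70)); then
                return 1
        elif [[ $state == idle ]] && ((cpu_rate > 30)); then
                return 1
        fi
        return 0
}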
00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:28.406 12:13:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:28.406 12:13:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:28.406 12:13:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144297 2 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144297 2 idle 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:28.407 12:13:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144302 root 20 0 20.1t 57076 26344 S 0.0 0.5 0:00.00 reactor_2' 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@48 -- # echo 144302 root 20 0 20.1t 57076 26344 S 0.0 0.5 0:00.00 reactor_2 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:28.665 12:13:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:28.666 12:13:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:28.666 12:13:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:28.666 12:13:34 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:29:28.666 12:13:34 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:28.924 [2024-11-29 12:13:34.287624] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:28.924 [2024-11-29 12:13:34.287936] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:29:28.924 [2024-11-29 12:13:34.288559] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:28.924 12:13:34 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:29.219 [2024-11-29 12:13:34.567622] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
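The mode switches themselves are plain RPC calls through the interrupt_plugin, exactly as traced: -d disables interrupt mode (the reactor falls back to polling and shows up busy in top), and omitting it re-enables interrupt mode.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> interrupt mode again
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> interrupt mode again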
00:29:29.219 [2024-11-29 12:13:34.568361] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:29.219 12:13:34 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:29.219 12:13:34 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144297 0 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144297 0 busy 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:29.219 12:13:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144297 root 20 0 20.1t 57184 26344 R 99.9 0.5 0:00.81 reactor_0' 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # echo 144297 root 20 0 20.1t 57184 26344 R 99.9 0.5 0:00.81 reactor_0 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:29.486 12:13:34 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:29.486 12:13:34 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144297 2 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144297 2 busy 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144302 root 20 0 20.1t 57184 26344 R 99.9 0.5 0:00.35 reactor_2' 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # echo 144302 root 20 0 20.1t 57184 26344 R 99.9 0.5 0:00.35 reactor_2 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:29.486 12:13:34 
-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:29.486 12:13:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:29.486 12:13:34 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:29.759 [2024-11-29 12:13:35.223855] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:29:29.759 [2024-11-29 12:13:35.227829] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:29.759 12:13:35 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:29:29.759 12:13:35 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 144297 2 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144297 2 idle 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:29.759 12:13:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:30.021 12:13:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144302 root 20 0 20.1t 57252 26344 S 0.0 0.5 0:00.65 reactor_2' 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@48 -- # echo 144302 root 20 0 20.1t 57252 26344 S 0.0 0.5 0:00.65 reactor_2 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:30.022 12:13:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:30.022 12:13:35 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:30.280 [2024-11-29 12:13:35.647835] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:30.280 [2024-11-29 12:13:35.648181] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
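Putting the traced steps together, the reactor_set_intr_mode flow behind both runs (144152 with the without_thd variant, 144297 with threads left on their reactors) looks roughly like the sketch below; the helper bodies and argument handling are assumptions, the ordering follows the trace.

reactor_set_intr_mode() {
        local spdk_pid=$1 without_thd=$2
        # without_thd variant only: park the app_thread on reactor 1 first
        # (thread_set_cpumask -i 1 -m 0x2 in the 144152 run above)
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
        for i in 0 2; do
                reactor_is_busy "$spdk_pid" "$i"   # poll mode: ~99% CPU in top
        done
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2
        reactor_is_idle "$spdk_pid" 2              # back to interrupt mode: ~0% CPU
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0
        # without_thd variant: restore the app_thread cpumask to 0x1 before the final check
        reactor_is_idle "$spdk_pid" 0
}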
00:29:30.280 [2024-11-29 12:13:35.648236] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:30.280 12:13:35 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:29:30.280 12:13:35 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 144297 0 00:29:30.280 12:13:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144297 0 idle 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@33 -- # local pid=144297 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144297 -w 256 00:29:30.281 12:13:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144297 root 20 0 20.1t 57308 26344 S 0.0 0.5 0:01.71 reactor_0' 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@48 -- # echo 144297 root 20 0 20.1t 57308 26344 S 0.0 0.5 0:01.71 reactor_0 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:30.539 12:13:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:30.539 12:13:35 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:30.539 12:13:35 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:29:30.539 12:13:35 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:30.539 12:13:35 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 144297 00:29:30.539 12:13:35 -- common/autotest_common.sh@936 -- # '[' -z 144297 ']' 00:29:30.539 12:13:35 -- common/autotest_common.sh@940 -- # kill -0 144297 00:29:30.539 12:13:35 -- common/autotest_common.sh@941 -- # uname 00:29:30.539 12:13:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:30.539 12:13:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144297 00:29:30.539 12:13:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:30.539 killing process with pid 144297 00:29:30.539 12:13:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:30.539 12:13:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144297' 00:29:30.539 12:13:35 -- common/autotest_common.sh@955 -- # kill 144297 00:29:30.539 12:13:35 -- common/autotest_common.sh@960 -- # wait 144297 00:29:30.799 12:13:36 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:29:30.799 12:13:36 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:30.799 ************************************ 
00:29:30.799 END TEST reactor_set_interrupt 00:29:30.799 ************************************ 00:29:30.799 00:29:30.799 real 0m10.736s 00:29:30.799 user 0m10.705s 00:29:30.799 sys 0m1.692s 00:29:30.799 12:13:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:30.799 12:13:36 -- common/autotest_common.sh@10 -- # set +x 00:29:30.799 12:13:36 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:30.799 12:13:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:30.799 12:13:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.799 12:13:36 -- common/autotest_common.sh@10 -- # set +x 00:29:30.799 ************************************ 00:29:30.799 START TEST reap_unregistered_poller 00:29:30.799 ************************************ 00:29:30.799 12:13:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:30.799 * Looking for test storage... 00:29:31.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.060 12:13:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:31.060 12:13:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:31.060 12:13:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:31.060 12:13:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:31.060 12:13:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:31.060 12:13:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:31.060 12:13:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:31.060 12:13:36 -- scripts/common.sh@335 -- # IFS=.-: 00:29:31.060 12:13:36 -- scripts/common.sh@335 -- # read -ra ver1 00:29:31.060 12:13:36 -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.060 12:13:36 -- scripts/common.sh@336 -- # read -ra ver2 00:29:31.060 12:13:36 -- scripts/common.sh@337 -- # local 'op=<' 00:29:31.060 12:13:36 -- scripts/common.sh@339 -- # ver1_l=2 00:29:31.060 12:13:36 -- scripts/common.sh@340 -- # ver2_l=1 00:29:31.060 12:13:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:31.060 12:13:36 -- scripts/common.sh@343 -- # case "$op" in 00:29:31.060 12:13:36 -- scripts/common.sh@344 -- # : 1 00:29:31.060 12:13:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:31.060 12:13:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.060 12:13:36 -- scripts/common.sh@364 -- # decimal 1 00:29:31.060 12:13:36 -- scripts/common.sh@352 -- # local d=1 00:29:31.060 12:13:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.060 12:13:36 -- scripts/common.sh@354 -- # echo 1 00:29:31.060 12:13:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:31.060 12:13:36 -- scripts/common.sh@365 -- # decimal 2 00:29:31.060 12:13:36 -- scripts/common.sh@352 -- # local d=2 00:29:31.060 12:13:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.060 12:13:36 -- scripts/common.sh@354 -- # echo 2 00:29:31.060 12:13:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:31.060 12:13:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:31.060 12:13:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:31.060 12:13:36 -- scripts/common.sh@367 -- # return 0 00:29:31.060 12:13:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.060 12:13:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.060 --rc genhtml_branch_coverage=1 00:29:31.060 --rc genhtml_function_coverage=1 00:29:31.060 --rc genhtml_legend=1 00:29:31.060 --rc geninfo_all_blocks=1 00:29:31.060 --rc geninfo_unexecuted_blocks=1 00:29:31.060 00:29:31.060 ' 00:29:31.060 12:13:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.060 --rc genhtml_branch_coverage=1 00:29:31.060 --rc genhtml_function_coverage=1 00:29:31.060 --rc genhtml_legend=1 00:29:31.060 --rc geninfo_all_blocks=1 00:29:31.060 --rc geninfo_unexecuted_blocks=1 00:29:31.060 00:29:31.060 ' 00:29:31.060 12:13:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.060 --rc genhtml_branch_coverage=1 00:29:31.061 --rc genhtml_function_coverage=1 00:29:31.061 --rc genhtml_legend=1 00:29:31.061 --rc geninfo_all_blocks=1 00:29:31.061 --rc geninfo_unexecuted_blocks=1 00:29:31.061 00:29:31.061 ' 00:29:31.061 12:13:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:31.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.061 --rc genhtml_branch_coverage=1 00:29:31.061 --rc genhtml_function_coverage=1 00:29:31.061 --rc genhtml_legend=1 00:29:31.061 --rc geninfo_all_blocks=1 00:29:31.061 --rc geninfo_unexecuted_blocks=1 00:29:31.061 00:29:31.061 ' 00:29:31.061 12:13:36 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:29:31.061 12:13:36 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:31.061 12:13:36 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.061 12:13:36 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.061 12:13:36 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
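The prologue being traced here resolves the test directories before sourcing the common helpers; in script form it is simply the following (the exact variable holding the calling script's path is an assumption, the resolved paths match the trace).

testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/interrupt
rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
source "$rootdir/test/common/autotest_common.sh"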
00:29:31.061 12:13:36 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:31.061 12:13:36 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:29:31.061 12:13:36 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:31.061 12:13:36 -- common/autotest_common.sh@34 -- # set -e 00:29:31.061 12:13:36 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:31.061 12:13:36 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:31.061 12:13:36 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:29:31.061 12:13:36 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:29:31.061 12:13:36 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:31.061 12:13:36 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:29:31.061 12:13:36 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:29:31.061 12:13:36 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:31.061 12:13:36 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:31.061 12:13:36 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:31.061 12:13:36 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:29:31.061 12:13:36 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:31.061 12:13:36 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:31.061 12:13:36 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:31.061 12:13:36 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:31.061 12:13:36 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:31.061 12:13:36 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:31.061 12:13:36 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:31.061 12:13:36 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:31.061 12:13:36 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:31.061 12:13:36 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:31.061 12:13:36 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:31.061 12:13:36 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:31.061 12:13:36 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:31.061 12:13:36 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:29:31.061 12:13:36 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:31.061 12:13:36 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:29:31.061 12:13:36 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:31.061 12:13:36 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:31.061 12:13:36 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:29:31.061 12:13:36 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:31.061 12:13:36 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:29:31.061 12:13:36 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:31.061 12:13:36 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:31.061 12:13:36 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:31.061 12:13:36 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:31.061 12:13:36 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:31.061 12:13:36 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:31.061 12:13:36 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:31.061 12:13:36 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:29:31.061 12:13:36 
-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:29:31.061 12:13:36 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:31.061 12:13:36 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:31.061 12:13:36 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:31.061 12:13:36 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:29:31.061 12:13:36 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:31.061 12:13:36 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:29:31.061 12:13:36 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:31.061 12:13:36 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:31.061 12:13:36 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:29:31.061 12:13:36 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:29:31.061 12:13:36 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:31.061 12:13:36 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:29:31.061 12:13:36 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:29:31.061 12:13:36 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:29:31.061 12:13:36 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:29:31.061 12:13:36 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:29:31.061 12:13:36 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:29:31.061 12:13:36 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:29:31.061 12:13:36 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:29:31.061 12:13:36 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:29:31.061 12:13:36 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:29:31.061 12:13:36 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:29:31.061 12:13:36 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:29:31.061 12:13:36 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:29:31.061 12:13:36 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:29:31.061 12:13:36 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:29:31.061 12:13:36 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:29:31.061 12:13:36 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:29:31.061 12:13:36 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:31.061 12:13:36 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:29:31.061 12:13:36 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:29:31.061 12:13:36 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:29:31.061 12:13:36 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:29:31.061 12:13:36 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:29:31.061 12:13:36 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:29:31.061 12:13:36 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:29:31.061 12:13:36 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:29:31.061 12:13:36 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:29:31.061 12:13:36 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:29:31.061 12:13:36 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:29:31.061 12:13:36 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:29:31.061 12:13:36 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:29:31.061 12:13:36 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:31.061 12:13:36 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:31.061 12:13:36 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:29:31.061 12:13:36 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:29:31.061 12:13:36 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:29:31.061 12:13:36 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:29:31.061 12:13:36 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:29:31.061 12:13:36 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:29:31.061 12:13:36 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:31.061 12:13:36 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:31.061 12:13:36 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:31.061 12:13:36 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:31.061 12:13:36 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:31.061 12:13:36 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:31.061 12:13:36 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:29:31.061 12:13:36 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:31.061 #define SPDK_CONFIG_H 00:29:31.061 #define SPDK_CONFIG_APPS 1 00:29:31.061 #define SPDK_CONFIG_ARCH native 00:29:31.061 #define SPDK_CONFIG_ASAN 1 00:29:31.061 #undef SPDK_CONFIG_AVAHI 00:29:31.061 #undef SPDK_CONFIG_CET 00:29:31.061 #define SPDK_CONFIG_COVERAGE 1 00:29:31.061 #define SPDK_CONFIG_CROSS_PREFIX 00:29:31.061 #undef SPDK_CONFIG_CRYPTO 00:29:31.061 #undef SPDK_CONFIG_CRYPTO_MLX5 00:29:31.061 #undef SPDK_CONFIG_CUSTOMOCF 00:29:31.061 #undef SPDK_CONFIG_DAOS 00:29:31.061 #define SPDK_CONFIG_DAOS_DIR 00:29:31.061 #define SPDK_CONFIG_DEBUG 1 00:29:31.061 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:29:31.061 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:29:31.061 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:29:31.061 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:29:31.061 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:31.061 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:31.061 #define SPDK_CONFIG_EXAMPLES 1 00:29:31.061 #undef SPDK_CONFIG_FC 00:29:31.061 #define SPDK_CONFIG_FC_PATH 00:29:31.061 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:31.061 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:31.061 #undef SPDK_CONFIG_FUSE 00:29:31.061 #undef SPDK_CONFIG_FUZZER 00:29:31.061 #define SPDK_CONFIG_FUZZER_LIB 00:29:31.061 #undef SPDK_CONFIG_GOLANG 00:29:31.061 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:29:31.061 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:31.062 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:31.062 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:31.062 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:31.062 #define SPDK_CONFIG_IDXD 1 00:29:31.062 #undef SPDK_CONFIG_IDXD_KERNEL 00:29:31.062 #undef SPDK_CONFIG_IPSEC_MB 00:29:31.062 #define SPDK_CONFIG_IPSEC_MB_DIR 00:29:31.062 #define SPDK_CONFIG_ISAL 1 00:29:31.062 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:31.062 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:31.062 #define SPDK_CONFIG_LIBDIR 00:29:31.062 #undef SPDK_CONFIG_LTO 00:29:31.062 #define SPDK_CONFIG_MAX_LCORES 00:29:31.062 #define SPDK_CONFIG_NVME_CUSE 1 00:29:31.062 #undef SPDK_CONFIG_OCF 00:29:31.062 #define SPDK_CONFIG_OCF_PATH 00:29:31.062 #define 
SPDK_CONFIG_OPENSSL_PATH 00:29:31.062 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:31.062 #undef SPDK_CONFIG_PGO_USE 00:29:31.062 #define SPDK_CONFIG_PREFIX /usr/local 00:29:31.062 #define SPDK_CONFIG_RAID5F 1 00:29:31.062 #undef SPDK_CONFIG_RBD 00:29:31.062 #define SPDK_CONFIG_RDMA 1 00:29:31.062 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:31.062 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:31.062 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:31.062 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:31.062 #undef SPDK_CONFIG_SHARED 00:29:31.062 #undef SPDK_CONFIG_SMA 00:29:31.062 #define SPDK_CONFIG_TESTS 1 00:29:31.062 #undef SPDK_CONFIG_TSAN 00:29:31.062 #undef SPDK_CONFIG_UBLK 00:29:31.062 #define SPDK_CONFIG_UBSAN 1 00:29:31.062 #define SPDK_CONFIG_UNIT_TESTS 1 00:29:31.062 #undef SPDK_CONFIG_URING 00:29:31.062 #define SPDK_CONFIG_URING_PATH 00:29:31.062 #undef SPDK_CONFIG_URING_ZNS 00:29:31.062 #undef SPDK_CONFIG_USDT 00:29:31.062 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:29:31.062 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:29:31.062 #undef SPDK_CONFIG_VFIO_USER 00:29:31.062 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:31.062 #define SPDK_CONFIG_VHOST 1 00:29:31.062 #define SPDK_CONFIG_VIRTIO 1 00:29:31.062 #undef SPDK_CONFIG_VTUNE 00:29:31.062 #define SPDK_CONFIG_VTUNE_DIR 00:29:31.062 #define SPDK_CONFIG_WERROR 1 00:29:31.062 #define SPDK_CONFIG_WPDK_DIR 00:29:31.062 #undef SPDK_CONFIG_XNVME 00:29:31.062 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:31.062 12:13:36 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:31.062 12:13:36 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:31.062 12:13:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.062 12:13:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.062 12:13:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.062 12:13:36 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:31.062 12:13:36 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:31.062 12:13:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:31.062 12:13:36 -- paths/export.sh@5 -- # export PATH 00:29:31.062 12:13:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:31.062 12:13:36 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:31.062 12:13:36 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:31.062 12:13:36 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:31.062 12:13:36 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:31.062 12:13:36 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:29:31.062 12:13:36 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:29:31.062 12:13:36 -- pm/common@16 -- # TEST_TAG=N/A 00:29:31.062 12:13:36 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:29:31.062 12:13:36 -- common/autotest_common.sh@52 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:29:31.062 12:13:36 -- common/autotest_common.sh@56 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:31.062 12:13:36 -- common/autotest_common.sh@58 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:29:31.062 12:13:36 -- common/autotest_common.sh@60 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:31.062 12:13:36 -- common/autotest_common.sh@62 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:29:31.062 12:13:36 -- common/autotest_common.sh@64 -- # : 00:29:31.062 12:13:36 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:29:31.062 12:13:36 -- common/autotest_common.sh@66 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:29:31.062 12:13:36 -- common/autotest_common.sh@68 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:29:31.062 12:13:36 -- common/autotest_common.sh@70 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:29:31.062 12:13:36 -- common/autotest_common.sh@72 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:31.062 12:13:36 -- common/autotest_common.sh@74 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:29:31.062 12:13:36 -- common/autotest_common.sh@76 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:29:31.062 12:13:36 -- common/autotest_common.sh@78 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:29:31.062 12:13:36 -- common/autotest_common.sh@80 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:29:31.062 12:13:36 -- common/autotest_common.sh@82 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:29:31.062 12:13:36 -- common/autotest_common.sh@84 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:29:31.062 12:13:36 -- 
common/autotest_common.sh@86 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:29:31.062 12:13:36 -- common/autotest_common.sh@88 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:29:31.062 12:13:36 -- common/autotest_common.sh@90 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:31.062 12:13:36 -- common/autotest_common.sh@92 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:29:31.062 12:13:36 -- common/autotest_common.sh@94 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:29:31.062 12:13:36 -- common/autotest_common.sh@96 -- # : rdma 00:29:31.062 12:13:36 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:31.062 12:13:36 -- common/autotest_common.sh@98 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:29:31.062 12:13:36 -- common/autotest_common.sh@100 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:29:31.062 12:13:36 -- common/autotest_common.sh@102 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:29:31.062 12:13:36 -- common/autotest_common.sh@104 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:29:31.062 12:13:36 -- common/autotest_common.sh@106 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:29:31.062 12:13:36 -- common/autotest_common.sh@108 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:29:31.062 12:13:36 -- common/autotest_common.sh@110 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:29:31.062 12:13:36 -- common/autotest_common.sh@112 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:31.062 12:13:36 -- common/autotest_common.sh@114 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:29:31.062 12:13:36 -- common/autotest_common.sh@116 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:29:31.062 12:13:36 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:29:31.062 12:13:36 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:31.062 12:13:36 -- common/autotest_common.sh@120 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:29:31.062 12:13:36 -- common/autotest_common.sh@122 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:29:31.062 12:13:36 -- common/autotest_common.sh@124 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:29:31.062 12:13:36 -- common/autotest_common.sh@126 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:29:31.062 12:13:36 -- common/autotest_common.sh@128 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:29:31.062 12:13:36 -- common/autotest_common.sh@130 -- # : 0 00:29:31.062 12:13:36 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:29:31.062 12:13:36 -- common/autotest_common.sh@132 -- # : v22.11.4 00:29:31.062 12:13:36 -- common/autotest_common.sh@133 -- # 
export SPDK_TEST_NATIVE_DPDK 00:29:31.062 12:13:36 -- common/autotest_common.sh@134 -- # : true 00:29:31.062 12:13:36 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:29:31.062 12:13:36 -- common/autotest_common.sh@136 -- # : 1 00:29:31.062 12:13:36 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:29:31.063 12:13:36 -- common/autotest_common.sh@138 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:29:31.063 12:13:36 -- common/autotest_common.sh@140 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:29:31.063 12:13:36 -- common/autotest_common.sh@142 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:29:31.063 12:13:36 -- common/autotest_common.sh@144 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:29:31.063 12:13:36 -- common/autotest_common.sh@146 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:29:31.063 12:13:36 -- common/autotest_common.sh@148 -- # : 00:29:31.063 12:13:36 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:29:31.063 12:13:36 -- common/autotest_common.sh@150 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:29:31.063 12:13:36 -- common/autotest_common.sh@152 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:29:31.063 12:13:36 -- common/autotest_common.sh@154 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:29:31.063 12:13:36 -- common/autotest_common.sh@156 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:29:31.063 12:13:36 -- common/autotest_common.sh@158 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:29:31.063 12:13:36 -- common/autotest_common.sh@160 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:29:31.063 12:13:36 -- common/autotest_common.sh@163 -- # : 00:29:31.063 12:13:36 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:29:31.063 12:13:36 -- common/autotest_common.sh@165 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:29:31.063 12:13:36 -- common/autotest_common.sh@167 -- # : 0 00:29:31.063 12:13:36 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:31.063 12:13:36 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:31.063 12:13:36 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:31.063 12:13:36 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:31.063 12:13:36 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:31.063 12:13:36 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:31.063 12:13:36 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:31.063 12:13:36 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:29:31.063 12:13:36 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:31.063 12:13:36 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:31.063 12:13:36 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:31.063 12:13:36 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:31.063 12:13:36 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:31.063 12:13:36 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:29:31.063 12:13:36 -- common/autotest_common.sh@196 -- # cat 00:29:31.063 12:13:36 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:29:31.063 12:13:36 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:31.063 12:13:36 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:31.063 12:13:36 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:31.063 12:13:36 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:31.063 12:13:36 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:29:31.063 12:13:36 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:29:31.063 12:13:36 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:31.063 12:13:36 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:31.063 12:13:36 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:31.063 12:13:36 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:31.063 12:13:36 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:29:31.063 12:13:36 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:29:31.063 12:13:36 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:31.063 12:13:36 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:31.063 12:13:36 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:31.063 12:13:36 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:31.063 12:13:36 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:31.063 12:13:36 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:31.063 12:13:36 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:29:31.063 12:13:36 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:29:31.063 12:13:36 -- common/autotest_common.sh@249 -- # _LCOV= 00:29:31.063 12:13:36 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:29:31.063 12:13:36 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:29:31.063 12:13:36 -- common/autotest_common.sh@255 -- # lcov_opt= 00:29:31.063 12:13:36 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:29:31.063 12:13:36 -- common/autotest_common.sh@259 -- # export valgrind= 00:29:31.063 12:13:36 -- common/autotest_common.sh@259 -- # valgrind= 00:29:31.063 12:13:36 -- common/autotest_common.sh@265 -- # uname -s 00:29:31.063 12:13:36 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:29:31.063 12:13:36 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:29:31.063 12:13:36 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:29:31.063 12:13:36 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:29:31.063 12:13:36 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@275 -- # MAKE=make 00:29:31.063 12:13:36 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:29:31.063 12:13:36 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:29:31.063 12:13:36 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:29:31.063 12:13:36 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:29:31.063 12:13:36 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:29:31.063 12:13:36 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:29:31.063 12:13:36 -- 
common/autotest_common.sh@319 -- # [[ -z 144469 ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@319 -- # kill -0 144469 00:29:31.063 12:13:36 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:29:31.063 12:13:36 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:29:31.063 12:13:36 -- common/autotest_common.sh@332 -- # local mount target_dir 00:29:31.063 12:13:36 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:29:31.063 12:13:36 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:29:31.063 12:13:36 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:29:31.063 12:13:36 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:29:31.063 12:13:36 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.rB5VsZ 00:29:31.063 12:13:36 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:31.063 12:13:36 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:29:31.063 12:13:36 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.rB5VsZ/tests/interrupt /tmp/spdk.rB5VsZ 00:29:31.063 12:13:36 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:29:31.063 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.063 12:13:36 -- common/autotest_common.sh@328 -- # df -T 00:29:31.063 12:13:36 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:29:31.063 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:31.063 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:31.063 12:13:36 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248944128 00:29:31.063 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:29:31.063 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=4734976 00:29:31.063 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.063 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # avails["$mount"]=8792215552 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:29:31.064 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=11807801344 00:29:31.064 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267133952 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268391424 00:29:31.064 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:29:31.064 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # 
avails["$mount"]=5242880 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:29:31.064 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:29:31.064 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:29:31.064 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:29:31.064 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253670912 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253675008 00:29:31.064 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:29:31.064 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:29:31.064 12:13:36 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # avails["$mount"]=93591212032 00:29:31.064 12:13:36 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:29:31.064 12:13:36 -- common/autotest_common.sh@364 -- # uses["$mount"]=6111567872 00:29:31.064 12:13:36 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:29:31.064 12:13:36 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:29:31.064 * Looking for test storage... 
00:29:31.064 12:13:36 -- common/autotest_common.sh@369 -- # local target_space new_size 00:29:31.064 12:13:36 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:29:31.064 12:13:36 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:31.064 12:13:36 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.064 12:13:36 -- common/autotest_common.sh@373 -- # mount=/ 00:29:31.064 12:13:36 -- common/autotest_common.sh@375 -- # target_space=8792215552 00:29:31.064 12:13:36 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:29:31.064 12:13:36 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:29:31.064 12:13:36 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:29:31.064 12:13:36 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:29:31.064 12:13:36 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:29:31.064 12:13:36 -- common/autotest_common.sh@382 -- # new_size=14022393856 00:29:31.064 12:13:36 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:31.064 12:13:36 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.064 12:13:36 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.064 12:13:36 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:31.064 12:13:36 -- common/autotest_common.sh@390 -- # return 0 00:29:31.064 12:13:36 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:29:31.064 12:13:36 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:29:31.064 12:13:36 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:31.064 12:13:36 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:31.064 12:13:36 -- common/autotest_common.sh@1682 -- # true 00:29:31.064 12:13:36 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:29:31.064 12:13:36 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:31.064 12:13:36 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:31.064 12:13:36 -- common/autotest_common.sh@27 -- # exec 00:29:31.064 12:13:36 -- common/autotest_common.sh@29 -- # exec 00:29:31.064 12:13:36 -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:31.064 12:13:36 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:29:31.064 12:13:36 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:31.064 12:13:36 -- common/autotest_common.sh@18 -- # set -x 00:29:31.064 12:13:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:31.323 12:13:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:31.323 12:13:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:31.323 12:13:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:31.323 12:13:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:31.323 12:13:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:31.323 12:13:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:31.323 12:13:36 -- scripts/common.sh@335 -- # IFS=.-: 00:29:31.323 12:13:36 -- scripts/common.sh@335 -- # read -ra ver1 00:29:31.323 12:13:36 -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.323 12:13:36 -- scripts/common.sh@336 -- # read -ra ver2 00:29:31.323 12:13:36 -- scripts/common.sh@337 -- # local 'op=<' 00:29:31.323 12:13:36 -- scripts/common.sh@339 -- # ver1_l=2 00:29:31.323 12:13:36 -- scripts/common.sh@340 -- # ver2_l=1 00:29:31.323 12:13:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:31.323 12:13:36 -- scripts/common.sh@343 -- # case "$op" in 00:29:31.323 12:13:36 -- scripts/common.sh@344 -- # : 1 00:29:31.323 12:13:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:31.323 12:13:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:31.323 12:13:36 -- scripts/common.sh@364 -- # decimal 1 00:29:31.323 12:13:36 -- scripts/common.sh@352 -- # local d=1 00:29:31.323 12:13:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.323 12:13:36 -- scripts/common.sh@354 -- # echo 1 00:29:31.323 12:13:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:31.323 12:13:36 -- scripts/common.sh@365 -- # decimal 2 00:29:31.323 12:13:36 -- scripts/common.sh@352 -- # local d=2 00:29:31.323 12:13:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.323 12:13:36 -- scripts/common.sh@354 -- # echo 2 00:29:31.323 12:13:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:31.323 12:13:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:31.323 12:13:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:31.323 12:13:36 -- scripts/common.sh@367 -- # return 0 00:29:31.323 12:13:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.323 12:13:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:31.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.323 --rc genhtml_branch_coverage=1 00:29:31.323 --rc genhtml_function_coverage=1 00:29:31.323 --rc genhtml_legend=1 00:29:31.323 --rc geninfo_all_blocks=1 00:29:31.323 --rc geninfo_unexecuted_blocks=1 00:29:31.323 00:29:31.323 ' 00:29:31.323 12:13:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:31.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.323 --rc genhtml_branch_coverage=1 00:29:31.323 --rc genhtml_function_coverage=1 00:29:31.323 --rc genhtml_legend=1 00:29:31.323 --rc geninfo_all_blocks=1 00:29:31.323 --rc geninfo_unexecuted_blocks=1 00:29:31.323 00:29:31.323 ' 00:29:31.323 12:13:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:31.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.323 --rc genhtml_branch_coverage=1 00:29:31.323 --rc genhtml_function_coverage=1 00:29:31.323 --rc genhtml_legend=1 00:29:31.323 --rc geninfo_all_blocks=1 00:29:31.323 --rc 
geninfo_unexecuted_blocks=1 00:29:31.323 00:29:31.323 ' 00:29:31.323 12:13:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:31.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.324 --rc genhtml_branch_coverage=1 00:29:31.324 --rc genhtml_function_coverage=1 00:29:31.324 --rc genhtml_legend=1 00:29:31.324 --rc geninfo_all_blocks=1 00:29:31.324 --rc geninfo_unexecuted_blocks=1 00:29:31.324 00:29:31.324 ' 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:31.324 12:13:36 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:31.324 12:13:36 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:31.324 12:13:36 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144535 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:31.324 12:13:36 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144535 /var/tmp/spdk.sock 00:29:31.324 12:13:36 -- common/autotest_common.sh@829 -- # '[' -z 144535 ']' 00:29:31.324 12:13:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.324 12:13:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.324 12:13:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.324 12:13:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.324 12:13:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.324 [2024-11-29 12:13:36.716132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
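Once interrupt_tgt is up and listening on /var/tmp/spdk.sock, the test queries its pollers over JSON-RPC and extracts fields with jq, as the following lines show. Here is a small standalone version of that query pattern, assuming the rpc.py script and the thread_get_pollers method visible in the trace; the helper name dump_app_thread_pollers is made up for illustration.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock

dump_app_thread_pollers() {
    # Fetch poller state for the first (app) thread from the running target.
    local app_thread
    app_thread=$("$rpc_py" -s "$rpc_sock" thread_get_pollers | jq -r '.threads[0]')

    # Names of pollers still registered in the active/timed lists; empty when none.
    jq -r '.active_pollers[].name' <<< "$app_thread"
    jq -r '.timed_pollers[].name'  <<< "$app_thread"
}

# Illustrative use: before the AIO bdev is created, only rpc_subsystem_poll
# is expected in the timed list, matching the JSON dumped below.
# dump_app_thread_pollers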
00:29:31.324 [2024-11-29 12:13:36.716346] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144535 ] 00:29:31.583 [2024-11-29 12:13:36.885849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.583 [2024-11-29 12:13:36.989247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.583 [2024-11-29 12:13:36.989364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.583 [2024-11-29 12:13:36.989698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.583 [2024-11-29 12:13:37.079757] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:32.517 12:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.517 12:13:37 -- common/autotest_common.sh@862 -- # return 0 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:29:32.517 12:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.517 12:13:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:29:32.517 12:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:29:32.517 "name": "app_thread", 00:29:32.517 "id": 1, 00:29:32.517 "active_pollers": [], 00:29:32.517 "timed_pollers": [ 00:29:32.517 { 00:29:32.517 "name": "rpc_subsystem_poll", 00:29:32.517 "id": 1, 00:29:32.517 "state": "waiting", 00:29:32.517 "run_count": 0, 00:29:32.517 "busy_count": 0, 00:29:32.517 "period_ticks": 8800000 00:29:32.517 } 00:29:32.517 ], 00:29:32.517 "paused_pollers": [] 00:29:32.517 }' 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:29:32.517 12:13:37 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:29:32.517 12:13:37 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:32.517 12:13:37 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:32.517 12:13:37 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:32.517 5000+0 records in 00:29:32.517 5000+0 records out 00:29:32.517 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0283522 s, 361 MB/s 00:29:32.518 12:13:37 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:29:32.776 AIO0 00:29:32.776 12:13:38 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:33.034 12:13:38 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:29:33.292 12:13:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.292 12:13:38 -- common/autotest_common.sh@10 -- # set +x 00:29:33.292 12:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:29:33.292 "name": "app_thread", 00:29:33.292 "id": 1, 00:29:33.292 "active_pollers": [], 00:29:33.292 "timed_pollers": [ 00:29:33.292 { 00:29:33.292 "name": "rpc_subsystem_poll", 00:29:33.292 "id": 1, 00:29:33.292 "state": "waiting", 00:29:33.292 "run_count": 0, 00:29:33.292 "busy_count": 0, 00:29:33.292 "period_ticks": 8800000 00:29:33.292 } 00:29:33.292 ], 00:29:33.292 "paused_pollers": [] 00:29:33.292 }' 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:29:33.292 12:13:38 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 144535 00:29:33.292 12:13:38 -- common/autotest_common.sh@936 -- # '[' -z 144535 ']' 00:29:33.292 12:13:38 -- common/autotest_common.sh@940 -- # kill -0 144535 00:29:33.292 12:13:38 -- common/autotest_common.sh@941 -- # uname 00:29:33.292 12:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:33.292 12:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144535 00:29:33.292 12:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:33.292 12:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:33.292 12:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144535' 00:29:33.292 killing process with pid 144535 00:29:33.292 12:13:38 -- common/autotest_common.sh@955 -- # kill 144535 00:29:33.292 12:13:38 -- common/autotest_common.sh@960 -- # wait 144535 00:29:33.858 12:13:39 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:29:33.858 12:13:39 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:33.858 00:29:33.858 real 0m2.847s 00:29:33.858 user 0m1.976s 00:29:33.858 sys 0m0.542s 00:29:33.858 12:13:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:33.858 12:13:39 -- common/autotest_common.sh@10 -- # set +x 00:29:33.858 ************************************ 00:29:33.858 END TEST reap_unregistered_poller 00:29:33.858 ************************************ 00:29:33.858 12:13:39 -- spdk/autotest.sh@191 -- # uname -s 00:29:33.858 12:13:39 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:29:33.858 12:13:39 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:29:33.858 12:13:39 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:29:33.858 12:13:39 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:29:33.858 12:13:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:33.858 12:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:33.858 12:13:39 -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.858 ************************************ 00:29:33.858 START TEST spdk_dd 00:29:33.858 ************************************ 00:29:33.858 12:13:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:29:33.858 * Looking for test storage... 00:29:33.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:33.858 12:13:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:33.858 12:13:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:33.858 12:13:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:33.858 12:13:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:33.858 12:13:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:33.858 12:13:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:33.858 12:13:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:33.858 12:13:39 -- scripts/common.sh@335 -- # IFS=.-: 00:29:33.858 12:13:39 -- scripts/common.sh@335 -- # read -ra ver1 00:29:33.858 12:13:39 -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.858 12:13:39 -- scripts/common.sh@336 -- # read -ra ver2 00:29:33.858 12:13:39 -- scripts/common.sh@337 -- # local 'op=<' 00:29:33.858 12:13:39 -- scripts/common.sh@339 -- # ver1_l=2 00:29:33.858 12:13:39 -- scripts/common.sh@340 -- # ver2_l=1 00:29:33.858 12:13:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:33.858 12:13:39 -- scripts/common.sh@343 -- # case "$op" in 00:29:33.858 12:13:39 -- scripts/common.sh@344 -- # : 1 00:29:33.858 12:13:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:33.858 12:13:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:33.858 12:13:39 -- scripts/common.sh@364 -- # decimal 1 00:29:33.858 12:13:39 -- scripts/common.sh@352 -- # local d=1 00:29:33.858 12:13:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.858 12:13:39 -- scripts/common.sh@354 -- # echo 1 00:29:33.858 12:13:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:33.858 12:13:39 -- scripts/common.sh@365 -- # decimal 2 00:29:33.858 12:13:39 -- scripts/common.sh@352 -- # local d=2 00:29:33.858 12:13:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.858 12:13:39 -- scripts/common.sh@354 -- # echo 2 00:29:33.858 12:13:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:33.858 12:13:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:33.858 12:13:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:33.858 12:13:39 -- scripts/common.sh@367 -- # return 0 00:29:33.858 12:13:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.858 12:13:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.858 --rc genhtml_branch_coverage=1 00:29:33.858 --rc genhtml_function_coverage=1 00:29:33.858 --rc genhtml_legend=1 00:29:33.858 --rc geninfo_all_blocks=1 00:29:33.858 --rc geninfo_unexecuted_blocks=1 00:29:33.858 00:29:33.858 ' 00:29:33.858 12:13:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.858 --rc genhtml_branch_coverage=1 00:29:33.858 --rc genhtml_function_coverage=1 00:29:33.858 --rc genhtml_legend=1 00:29:33.858 --rc geninfo_all_blocks=1 00:29:33.858 --rc geninfo_unexecuted_blocks=1 00:29:33.858 00:29:33.858 ' 00:29:33.858 12:13:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:33.858 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.858 --rc genhtml_branch_coverage=1 00:29:33.858 --rc genhtml_function_coverage=1 00:29:33.858 --rc genhtml_legend=1 00:29:33.858 --rc geninfo_all_blocks=1 00:29:33.858 --rc geninfo_unexecuted_blocks=1 00:29:33.858 00:29:33.858 ' 00:29:33.859 12:13:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.859 --rc genhtml_branch_coverage=1 00:29:33.859 --rc genhtml_function_coverage=1 00:29:33.859 --rc genhtml_legend=1 00:29:33.859 --rc geninfo_all_blocks=1 00:29:33.859 --rc geninfo_unexecuted_blocks=1 00:29:33.859 00:29:33.859 ' 00:29:33.859 12:13:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:33.859 12:13:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.859 12:13:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.859 12:13:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.859 12:13:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:33.859 12:13:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:33.859 12:13:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:33.859 12:13:39 -- paths/export.sh@5 -- # export PATH 00:29:33.859 12:13:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:33.859 12:13:39 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:34.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:34.391 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:35.327 12:13:40 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:29:35.327 12:13:40 -- dd/dd.sh@11 -- # nvme_in_userspace 00:29:35.327 12:13:40 -- scripts/common.sh@311 -- # local bdf bdfs 00:29:35.327 12:13:40 -- scripts/common.sh@312 -- # local nvmes 00:29:35.327 12:13:40 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:29:35.327 12:13:40 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:29:35.327 12:13:40 -- 
scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:29:35.327 12:13:40 -- scripts/common.sh@297 -- # local bdf= 00:29:35.327 12:13:40 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:29:35.327 12:13:40 -- scripts/common.sh@232 -- # local class 00:29:35.327 12:13:40 -- scripts/common.sh@233 -- # local subclass 00:29:35.327 12:13:40 -- scripts/common.sh@234 -- # local progif 00:29:35.327 12:13:40 -- scripts/common.sh@235 -- # printf %02x 1 00:29:35.327 12:13:40 -- scripts/common.sh@235 -- # class=01 00:29:35.327 12:13:40 -- scripts/common.sh@236 -- # printf %02x 8 00:29:35.327 12:13:40 -- scripts/common.sh@236 -- # subclass=08 00:29:35.327 12:13:40 -- scripts/common.sh@237 -- # printf %02x 2 00:29:35.327 12:13:40 -- scripts/common.sh@237 -- # progif=02 00:29:35.327 12:13:40 -- scripts/common.sh@239 -- # hash lspci 00:29:35.327 12:13:40 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:29:35.327 12:13:40 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:29:35.327 12:13:40 -- scripts/common.sh@242 -- # grep -i -- -p02 00:29:35.327 12:13:40 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:29:35.327 12:13:40 -- scripts/common.sh@244 -- # tr -d '"' 00:29:35.327 12:13:40 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:35.327 12:13:40 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:29:35.327 12:13:40 -- scripts/common.sh@15 -- # local i 00:29:35.327 12:13:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:35.327 12:13:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:35.327 12:13:40 -- scripts/common.sh@24 -- # return 0 00:29:35.327 12:13:40 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:29:35.327 12:13:40 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:29:35.327 12:13:40 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:29:35.327 12:13:40 -- scripts/common.sh@322 -- # uname -s 00:29:35.327 12:13:40 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:29:35.327 12:13:40 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:29:35.327 12:13:40 -- scripts/common.sh@327 -- # (( 1 )) 00:29:35.327 12:13:40 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:29:35.327 12:13:40 -- dd/dd.sh@13 -- # check_liburing 00:29:35.327 12:13:40 -- dd/common.sh@139 -- # local lib so 00:29:35.327 12:13:40 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:29:35.327 12:13:40 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r 
lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:29:35.327 12:13:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:35.327 12:13:40 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:29:35.327 12:13:40 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:29:35.327 12:13:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:35.327 12:13:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:35.327 12:13:40 -- common/autotest_common.sh@10 -- # set +x 00:29:35.327 ************************************ 00:29:35.327 START TEST spdk_dd_basic_rw 00:29:35.327 ************************************ 00:29:35.327 12:13:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:29:35.327 * Looking for test storage... 
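The check_liburing walk a few lines back runs the spdk_dd binary with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the shared objects it would load (ldd-style) instead of executing the program, and compares each name against liburing.so.*. A condensed sketch of that check, using grep in place of the per-line glob loop seen in the trace:

binary=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0

# The loader prints one "lib => path" line per dependency and exits without
# running spdk_dd; a liburing.so match means the build linked against io_uring.
if LD_TRACE_LOADED_OBJECTS=1 "$binary" | grep -q 'liburing\.so'; then
    liburing_in_use=1
fi

echo "liburing_in_use=$liburing_in_use"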
00:29:35.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:35.327 12:13:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:35.327 12:13:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:35.327 12:13:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:35.586 12:13:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:35.586 12:13:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:35.586 12:13:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:35.586 12:13:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:35.586 12:13:40 -- scripts/common.sh@335 -- # IFS=.-: 00:29:35.586 12:13:40 -- scripts/common.sh@335 -- # read -ra ver1 00:29:35.586 12:13:40 -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.586 12:13:40 -- scripts/common.sh@336 -- # read -ra ver2 00:29:35.586 12:13:40 -- scripts/common.sh@337 -- # local 'op=<' 00:29:35.586 12:13:40 -- scripts/common.sh@339 -- # ver1_l=2 00:29:35.586 12:13:40 -- scripts/common.sh@340 -- # ver2_l=1 00:29:35.586 12:13:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:35.586 12:13:40 -- scripts/common.sh@343 -- # case "$op" in 00:29:35.586 12:13:40 -- scripts/common.sh@344 -- # : 1 00:29:35.586 12:13:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:35.586 12:13:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:35.586 12:13:40 -- scripts/common.sh@364 -- # decimal 1 00:29:35.586 12:13:40 -- scripts/common.sh@352 -- # local d=1 00:29:35.586 12:13:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.586 12:13:40 -- scripts/common.sh@354 -- # echo 1 00:29:35.586 12:13:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:35.586 12:13:40 -- scripts/common.sh@365 -- # decimal 2 00:29:35.586 12:13:40 -- scripts/common.sh@352 -- # local d=2 00:29:35.586 12:13:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.586 12:13:40 -- scripts/common.sh@354 -- # echo 2 00:29:35.586 12:13:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:35.586 12:13:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:35.586 12:13:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:35.586 12:13:40 -- scripts/common.sh@367 -- # return 0 00:29:35.586 12:13:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.586 12:13:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:35.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.586 --rc genhtml_branch_coverage=1 00:29:35.586 --rc genhtml_function_coverage=1 00:29:35.586 --rc genhtml_legend=1 00:29:35.586 --rc geninfo_all_blocks=1 00:29:35.586 --rc geninfo_unexecuted_blocks=1 00:29:35.586 00:29:35.586 ' 00:29:35.586 12:13:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:35.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.586 --rc genhtml_branch_coverage=1 00:29:35.586 --rc genhtml_function_coverage=1 00:29:35.586 --rc genhtml_legend=1 00:29:35.586 --rc geninfo_all_blocks=1 00:29:35.586 --rc geninfo_unexecuted_blocks=1 00:29:35.586 00:29:35.586 ' 00:29:35.586 12:13:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:35.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.587 --rc genhtml_branch_coverage=1 00:29:35.587 --rc genhtml_function_coverage=1 00:29:35.587 --rc genhtml_legend=1 00:29:35.587 --rc geninfo_all_blocks=1 00:29:35.587 --rc geninfo_unexecuted_blocks=1 00:29:35.587 00:29:35.587 ' 00:29:35.587 12:13:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:35.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.587 --rc genhtml_branch_coverage=1 00:29:35.587 --rc genhtml_function_coverage=1 00:29:35.587 --rc genhtml_legend=1 00:29:35.587 --rc geninfo_all_blocks=1 00:29:35.587 --rc geninfo_unexecuted_blocks=1 00:29:35.587 00:29:35.587 ' 00:29:35.587 12:13:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:35.587 12:13:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.587 12:13:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.587 12:13:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.587 12:13:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:35.587 12:13:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:35.587 12:13:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:35.587 12:13:40 -- paths/export.sh@5 -- # export PATH 00:29:35.587 12:13:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:35.587 12:13:40 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:29:35.587 12:13:40 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:29:35.587 12:13:40 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:29:35.587 12:13:40 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:29:35.587 12:13:40 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:29:35.587 12:13:40 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:29:35.587 12:13:40 -- dd/basic_rw.sh@85 -- # declare -A 
method_bdev_nvme_attach_controller_0 00:29:35.587 12:13:40 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:35.587 12:13:40 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:35.587 12:13:40 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:29:35.587 12:13:40 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:29:35.587 12:13:40 -- dd/common.sh@126 -- # mapfile -t id 00:29:35.587 12:13:40 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:29:35.848 12:13:41 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware 
Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% 
Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 97 Data Units Written: 7 Host Read Commands: 2107 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:29:35.848 12:13:41 -- dd/common.sh@130 -- # lbaf=04 00:29:35.849 12:13:41 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery 
Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer 
Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 97 Data Units Written: 7 Host Read Commands: 2107 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:29:35.849 12:13:41 -- dd/common.sh@132 -- # lbaf=4096 00:29:35.849 12:13:41 -- dd/common.sh@134 -- # echo 4096 00:29:35.849 12:13:41 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:29:35.849 12:13:41 -- dd/basic_rw.sh@96 -- # : 00:29:35.849 12:13:41 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:35.849 12:13:41 -- dd/basic_rw.sh@96 -- # gen_conf 00:29:35.849 12:13:41 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:29:35.849 12:13:41 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:29:35.849 12:13:41 -- dd/common.sh@31 -- # xtrace_disable 00:29:35.849 12:13:41 -- common/autotest_common.sh@10 -- # set +x 00:29:35.849 12:13:41 -- common/autotest_common.sh@10 -- # set +x 00:29:35.849 ************************************ 00:29:35.849 START TEST dd_bs_lt_native_bs 00:29:35.849 ************************************ 00:29:35.849 12:13:41 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:35.849 12:13:41 -- common/autotest_common.sh@650 -- # local es=0 00:29:35.849 12:13:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:35.849 12:13:41 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:35.849 12:13:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:35.849 12:13:41 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:35.849 12:13:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:35.849 12:13:41 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:35.849 12:13:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:35.849 12:13:41 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:35.849 12:13:41 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:35.849 12:13:41 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:35.849 { 00:29:35.849 "subsystems": [ 00:29:35.849 { 00:29:35.849 "subsystem": "bdev", 00:29:35.849 "config": [ 00:29:35.849 { 00:29:35.849 "params": { 00:29:35.849 "trtype": "pcie", 00:29:35.849 "traddr": "0000:00:06.0", 00:29:35.849 "name": "Nvme0" 00:29:35.849 }, 00:29:35.849 "method": "bdev_nvme_attach_controller" 00:29:35.849 }, 00:29:35.849 { 00:29:35.849 "method": "bdev_wait_for_examine" 00:29:35.849 } 00:29:35.849 ] 00:29:35.849 } 00:29:35.849 ] 00:29:35.849 } 00:29:35.849 [2024-11-29 12:13:41.234395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
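The huge controller dump a few entries back is consumed by get_native_nvme_bs (dd/common.sh@124-134): it captures the spdk_nvme_identify report for the PCIe device, extracts which LBA format the namespace currently uses (#04 here), and echoes that format's data size, which is how the test arrives at a native block size of 4096. A sketch reconstructed from the traced steps:

    get_native_nvme_bs() {
      local pci=$1 lbaf id
      mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
      # "Current LBA Format: LBA Format #04"  ->  lbaf=04
      local cur='Current LBA Format: *LBA Format #([0-9]+)'
      [[ ${id[*]} =~ $cur ]] && lbaf=${BASH_REMATCH[1]}
      # "LBA Format #04: Data Size: 4096"     ->  print 4096
      local fmt="LBA Format #$lbaf: Data Size: *([0-9]+)"
      [[ ${id[*]} =~ $fmt ]] && echo "${BASH_REMATCH[1]}"
    }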
00:29:35.849 [2024-11-29 12:13:41.234958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144844 ] 00:29:36.108 [2024-11-29 12:13:41.385959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.108 [2024-11-29 12:13:41.487296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.367 [2024-11-29 12:13:41.651082] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:29:36.367 [2024-11-29 12:13:41.651210] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:36.367 [2024-11-29 12:13:41.786572] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:36.625 12:13:41 -- common/autotest_common.sh@653 -- # es=234 00:29:36.625 12:13:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:36.625 12:13:41 -- common/autotest_common.sh@662 -- # es=106 00:29:36.625 12:13:41 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:36.625 12:13:41 -- common/autotest_common.sh@670 -- # es=1 00:29:36.625 12:13:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:36.625 00:29:36.625 real 0m0.773s 00:29:36.625 user 0m0.521s 00:29:36.625 sys 0m0.212s 00:29:36.625 12:13:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:36.625 12:13:41 -- common/autotest_common.sh@10 -- # set +x 00:29:36.625 ************************************ 00:29:36.625 END TEST dd_bs_lt_native_bs 00:29:36.625 ************************************ 00:29:36.625 12:13:41 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:29:36.625 12:13:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:36.625 12:13:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:36.625 12:13:41 -- common/autotest_common.sh@10 -- # set +x 00:29:36.625 ************************************ 00:29:36.625 START TEST dd_rw 00:29:36.625 ************************************ 00:29:36.625 12:13:41 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:29:36.625 12:13:41 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:29:36.625 12:13:41 -- dd/basic_rw.sh@12 -- # local count size 00:29:36.625 12:13:41 -- dd/basic_rw.sh@13 -- # local qds bss 00:29:36.625 12:13:41 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:29:36.625 12:13:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:29:36.625 12:13:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:29:36.625 12:13:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:29:36.625 12:13:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:29:36.625 12:13:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:29:36.625 12:13:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:29:36.625 12:13:41 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:29:36.625 12:13:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:36.625 12:13:41 -- dd/basic_rw.sh@23 -- # count=15 00:29:36.625 12:13:41 -- dd/basic_rw.sh@24 -- # count=15 00:29:36.625 12:13:41 -- dd/basic_rw.sh@25 -- # size=61440 00:29:36.625 12:13:41 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:29:36.625 12:13:41 -- dd/common.sh@98 -- # xtrace_disable 00:29:36.625 12:13:41 -- common/autotest_common.sh@10 -- # set +x 00:29:37.192 12:13:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
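The es=234, es=106, es=1 sequence just above is the negative-test plumbing concluding that dd_bs_lt_native_bs passed: spdk_dd was expected to refuse --bs=2048 because it is smaller than the 4096-byte native block size, and the NOT wrapper only reports success when the wrapped command failed. A rough sketch of that logic, inferred from the traced values rather than copied from autotest_common.sh:

    NOT() {
      local es=0
      "$@" || es=$?                          # the wrapped spdk_dd exits non-zero (234 here)
      (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, consistent with the traced values
      case "$es" in
        0) ;;                                # unexpected success is left alone and fails the check below
        *) es=1 ;;                           # any failure code collapses to plain 1
      esac
      (( !es == 0 ))                         # exit 0 (test passes) only when the command failed
    }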
00:29:37.192 12:13:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:37.192 12:13:42 -- dd/common.sh@31 -- # xtrace_disable 00:29:37.192 12:13:42 -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 [2024-11-29 12:13:42.737793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:37.451 [2024-11-29 12:13:42.738045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144889 ] 00:29:37.451 { 00:29:37.451 "subsystems": [ 00:29:37.451 { 00:29:37.451 "subsystem": "bdev", 00:29:37.451 "config": [ 00:29:37.451 { 00:29:37.451 "params": { 00:29:37.451 "trtype": "pcie", 00:29:37.451 "traddr": "0000:00:06.0", 00:29:37.451 "name": "Nvme0" 00:29:37.451 }, 00:29:37.451 "method": "bdev_nvme_attach_controller" 00:29:37.451 }, 00:29:37.451 { 00:29:37.451 "method": "bdev_wait_for_examine" 00:29:37.451 } 00:29:37.451 ] 00:29:37.451 } 00:29:37.451 ] 00:29:37.451 } 00:29:37.451 [2024-11-29 12:13:42.889913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.716 [2024-11-29 12:13:42.985660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.716  [2024-11-29T12:13:43.484Z] Copying: 60/60 [kB] (average 19 MBps) 00:29:37.973 00:29:37.973 12:13:43 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:29:37.973 12:13:43 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:37.973 12:13:43 -- dd/common.sh@31 -- # xtrace_disable 00:29:37.973 12:13:43 -- common/autotest_common.sh@10 -- # set +x 00:29:38.231 [2024-11-29 12:13:43.519257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
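With the native block size established, every dd_rw iteration is the same three-step round trip that has just started here: write a generated pattern from dd.dump0 onto the bdev, read the same number of blocks back into dd.dump1, and compare. Condensed from the traced commands (paths shortened; gen_conf emits the same pcie/Nvme0 bdev config shown in the JSON blocks):

    bs=4096 qd=1 count=15
    # write the pattern file onto the Nvme0n1 bdev
    spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json <(gen_conf)
    # read the same region back into a second dump file
    spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=$bs --qd=$qd --count=$count --json <(gen_conf)
    # the iteration passes only if the two files are byte-identical
    diff -q test/dd/dd.dump0 test/dd/dd.dump1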
00:29:38.231 [2024-11-29 12:13:43.519559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144907 ] 00:29:38.231 { 00:29:38.231 "subsystems": [ 00:29:38.231 { 00:29:38.231 "subsystem": "bdev", 00:29:38.231 "config": [ 00:29:38.231 { 00:29:38.231 "params": { 00:29:38.231 "trtype": "pcie", 00:29:38.231 "traddr": "0000:00:06.0", 00:29:38.231 "name": "Nvme0" 00:29:38.231 }, 00:29:38.232 "method": "bdev_nvme_attach_controller" 00:29:38.232 }, 00:29:38.232 { 00:29:38.232 "method": "bdev_wait_for_examine" 00:29:38.232 } 00:29:38.232 ] 00:29:38.232 } 00:29:38.232 ] 00:29:38.232 } 00:29:38.232 [2024-11-29 12:13:43.674833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.490 [2024-11-29 12:13:43.770175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.490  [2024-11-29T12:13:44.260Z] Copying: 60/60 [kB] (average 14 MBps) 00:29:38.749 00:29:38.749 12:13:44 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:38.749 12:13:44 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:29:38.749 12:13:44 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:38.749 12:13:44 -- dd/common.sh@11 -- # local nvme_ref= 00:29:38.749 12:13:44 -- dd/common.sh@12 -- # local size=61440 00:29:38.749 12:13:44 -- dd/common.sh@14 -- # local bs=1048576 00:29:38.750 12:13:44 -- dd/common.sh@15 -- # local count=1 00:29:38.750 12:13:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:38.750 12:13:44 -- dd/common.sh@18 -- # gen_conf 00:29:38.750 12:13:44 -- dd/common.sh@31 -- # xtrace_disable 00:29:38.750 12:13:44 -- common/autotest_common.sh@10 -- # set +x 00:29:39.008 [2024-11-29 12:13:44.283240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
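clear_nvme, which follows every successful diff, zero-fills the start of the bdev so the next pass cannot read stale data left over from the previous one. For the 61440-byte pass above that amounts to a single 1 MiB write from /dev/zero; a sketch of the helper as the trace shows it (the real version presumably derives the count from the size argument):

    clear_nvme() {
      local bdev=$1 nvme_ref=$2 size=$3     # invoked as: clear_nvme Nvme0n1 '' 61440
      local bs=1048576 count=1              # one 1 MiB block comfortably covers 61440 bytes
      spdk_dd --if=/dev/zero --bs=$bs --ob=$bdev --count=$count --json <(gen_conf)
    }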
00:29:39.008 [2024-11-29 12:13:44.283485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144928 ] 00:29:39.008 { 00:29:39.008 "subsystems": [ 00:29:39.008 { 00:29:39.008 "subsystem": "bdev", 00:29:39.008 "config": [ 00:29:39.008 { 00:29:39.008 "params": { 00:29:39.008 "trtype": "pcie", 00:29:39.008 "traddr": "0000:00:06.0", 00:29:39.008 "name": "Nvme0" 00:29:39.008 }, 00:29:39.008 "method": "bdev_nvme_attach_controller" 00:29:39.008 }, 00:29:39.008 { 00:29:39.008 "method": "bdev_wait_for_examine" 00:29:39.008 } 00:29:39.008 ] 00:29:39.008 } 00:29:39.008 ] 00:29:39.008 } 00:29:39.008 [2024-11-29 12:13:44.431348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.267 [2024-11-29 12:13:44.527749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.267  [2024-11-29T12:13:45.036Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:29:39.525 00:29:39.525 12:13:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:39.525 12:13:44 -- dd/basic_rw.sh@23 -- # count=15 00:29:39.525 12:13:44 -- dd/basic_rw.sh@24 -- # count=15 00:29:39.525 12:13:44 -- dd/basic_rw.sh@25 -- # size=61440 00:29:39.525 12:13:44 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:29:39.525 12:13:44 -- dd/common.sh@98 -- # xtrace_disable 00:29:39.525 12:13:44 -- common/autotest_common.sh@10 -- # set +x 00:29:40.465 12:13:45 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:29:40.465 12:13:45 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:40.465 12:13:45 -- dd/common.sh@31 -- # xtrace_disable 00:29:40.465 12:13:45 -- common/autotest_common.sh@10 -- # set +x 00:29:40.465 [2024-11-29 12:13:45.712034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
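The switch to --qd=64 here is the inner loop of the matrix dd_rw sweeps: three block sizes derived from the native one, each exercised at queue depths 1 and 64. The loop skeleton, taken from the basic_rw.sh lines echoed at the start of the test:

    native_bs=4096                      # reported by get_native_nvme_bs
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
      bss+=($(( native_bs << bs )))     # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        :                               # one write/read/diff/clear cycle per (bs, qd) pair, six in total
      done
    done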
00:29:40.465 [2024-11-29 12:13:45.712253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144948 ] 00:29:40.465 { 00:29:40.465 "subsystems": [ 00:29:40.465 { 00:29:40.465 "subsystem": "bdev", 00:29:40.465 "config": [ 00:29:40.465 { 00:29:40.465 "params": { 00:29:40.465 "trtype": "pcie", 00:29:40.465 "traddr": "0000:00:06.0", 00:29:40.465 "name": "Nvme0" 00:29:40.465 }, 00:29:40.465 "method": "bdev_nvme_attach_controller" 00:29:40.465 }, 00:29:40.465 { 00:29:40.465 "method": "bdev_wait_for_examine" 00:29:40.465 } 00:29:40.465 ] 00:29:40.465 } 00:29:40.465 ] 00:29:40.465 } 00:29:40.465 [2024-11-29 12:13:45.860813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.465 [2024-11-29 12:13:45.963695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.725  [2024-11-29T12:13:46.494Z] Copying: 60/60 [kB] (average 58 MBps) 00:29:40.983 00:29:40.983 12:13:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:29:40.983 12:13:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:40.983 12:13:46 -- dd/common.sh@31 -- # xtrace_disable 00:29:40.983 12:13:46 -- common/autotest_common.sh@10 -- # set +x 00:29:40.983 [2024-11-29 12:13:46.492911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:40.983 [2024-11-29 12:13:46.493174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144969 ] 00:29:40.983 { 00:29:40.983 "subsystems": [ 00:29:40.983 { 00:29:40.983 "subsystem": "bdev", 00:29:40.983 "config": [ 00:29:40.983 { 00:29:40.983 "params": { 00:29:40.983 "trtype": "pcie", 00:29:40.983 "traddr": "0000:00:06.0", 00:29:40.983 "name": "Nvme0" 00:29:40.983 }, 00:29:40.983 "method": "bdev_nvme_attach_controller" 00:29:40.983 }, 00:29:40.983 { 00:29:40.983 "method": "bdev_wait_for_examine" 00:29:40.983 } 00:29:40.983 ] 00:29:40.983 } 00:29:40.983 ] 00:29:40.983 } 00:29:41.240 [2024-11-29 12:13:46.640613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.240 [2024-11-29 12:13:46.740758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.498  [2024-11-29T12:13:47.268Z] Copying: 60/60 [kB] (average 58 MBps) 00:29:41.757 00:29:41.757 12:13:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:41.757 12:13:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:29:41.757 12:13:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:41.757 12:13:47 -- dd/common.sh@11 -- # local nvme_ref= 00:29:41.757 12:13:47 -- dd/common.sh@12 -- # local size=61440 00:29:41.757 12:13:47 -- dd/common.sh@14 -- # local bs=1048576 00:29:41.757 12:13:47 -- dd/common.sh@15 -- # local count=1 00:29:41.757 12:13:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:41.757 12:13:47 -- dd/common.sh@18 -- # gen_conf 00:29:41.757 12:13:47 -- dd/common.sh@31 -- # xtrace_disable 00:29:41.757 12:13:47 -- common/autotest_common.sh@10 -- # set +x 00:29:41.757 { 
00:29:41.757 "subsystems": [ 00:29:41.757 { 00:29:41.757 "subsystem": "bdev", 00:29:41.757 "config": [ 00:29:41.757 { 00:29:41.757 "params": { 00:29:41.757 "trtype": "pcie", 00:29:41.757 "traddr": "0000:00:06.0", 00:29:41.757 "name": "Nvme0" 00:29:41.757 }, 00:29:41.757 "method": "bdev_nvme_attach_controller" 00:29:41.757 }, 00:29:41.757 { 00:29:41.757 "method": "bdev_wait_for_examine" 00:29:41.757 } 00:29:41.757 ] 00:29:41.757 } 00:29:41.757 ] 00:29:41.757 } 00:29:41.757 [2024-11-29 12:13:47.266387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:41.757 [2024-11-29 12:13:47.266710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144986 ] 00:29:42.016 [2024-11-29 12:13:47.425062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.016 [2024-11-29 12:13:47.526715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.275  [2024-11-29T12:13:48.044Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:42.533 00:29:42.533 12:13:48 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:29:42.533 12:13:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:42.533 12:13:48 -- dd/basic_rw.sh@23 -- # count=7 00:29:42.533 12:13:48 -- dd/basic_rw.sh@24 -- # count=7 00:29:42.533 12:13:48 -- dd/basic_rw.sh@25 -- # size=57344 00:29:42.533 12:13:48 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:29:42.533 12:13:48 -- dd/common.sh@98 -- # xtrace_disable 00:29:42.533 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:29:43.101 12:13:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:29:43.101 12:13:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:43.101 12:13:48 -- dd/common.sh@31 -- # xtrace_disable 00:29:43.101 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:29:43.358 [2024-11-29 12:13:48.633929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:43.358 [2024-11-29 12:13:48.634148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145007 ] 00:29:43.358 { 00:29:43.358 "subsystems": [ 00:29:43.358 { 00:29:43.358 "subsystem": "bdev", 00:29:43.358 "config": [ 00:29:43.358 { 00:29:43.358 "params": { 00:29:43.358 "trtype": "pcie", 00:29:43.358 "traddr": "0000:00:06.0", 00:29:43.358 "name": "Nvme0" 00:29:43.358 }, 00:29:43.358 "method": "bdev_nvme_attach_controller" 00:29:43.358 }, 00:29:43.358 { 00:29:43.359 "method": "bdev_wait_for_examine" 00:29:43.359 } 00:29:43.359 ] 00:29:43.359 } 00:29:43.359 ] 00:29:43.359 } 00:29:43.359 [2024-11-29 12:13:48.776484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.617 [2024-11-29 12:13:48.874995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.617  [2024-11-29T12:13:49.387Z] Copying: 56/56 [kB] (average 54 MBps) 00:29:43.876 00:29:43.876 12:13:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:29:43.876 12:13:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:43.876 12:13:49 -- dd/common.sh@31 -- # xtrace_disable 00:29:43.876 12:13:49 -- common/autotest_common.sh@10 -- # set +x 00:29:43.876 { 00:29:43.876 "subsystems": [ 00:29:43.876 { 00:29:43.876 "subsystem": "bdev", 00:29:43.876 "config": [ 00:29:43.876 { 00:29:43.876 "params": { 00:29:43.876 "trtype": "pcie", 00:29:43.876 "traddr": "0000:00:06.0", 00:29:43.876 "name": "Nvme0" 00:29:43.876 }, 00:29:43.876 "method": "bdev_nvme_attach_controller" 00:29:43.876 }, 00:29:43.876 { 00:29:43.876 "method": "bdev_wait_for_examine" 00:29:43.876 } 00:29:43.876 ] 00:29:43.876 } 00:29:43.876 ] 00:29:43.876 } 00:29:44.135 [2024-11-29 12:13:49.394995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:44.135 [2024-11-29 12:13:49.395309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145027 ] 00:29:44.135 [2024-11-29 12:13:49.550713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.135 [2024-11-29 12:13:49.647997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.394  [2024-11-29T12:13:50.164Z] Copying: 56/56 [kB] (average 54 MBps) 00:29:44.653 00:29:44.653 12:13:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:44.653 12:13:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:29:44.653 12:13:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:44.653 12:13:50 -- dd/common.sh@11 -- # local nvme_ref= 00:29:44.653 12:13:50 -- dd/common.sh@12 -- # local size=57344 00:29:44.653 12:13:50 -- dd/common.sh@14 -- # local bs=1048576 00:29:44.653 12:13:50 -- dd/common.sh@15 -- # local count=1 00:29:44.653 12:13:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:44.653 12:13:50 -- dd/common.sh@18 -- # gen_conf 00:29:44.653 12:13:50 -- dd/common.sh@31 -- # xtrace_disable 00:29:44.653 12:13:50 -- common/autotest_common.sh@10 -- # set +x 00:29:44.912 [2024-11-29 12:13:50.170007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:44.912 [2024-11-29 12:13:50.170260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145047 ] 00:29:44.912 { 00:29:44.912 "subsystems": [ 00:29:44.912 { 00:29:44.912 "subsystem": "bdev", 00:29:44.912 "config": [ 00:29:44.912 { 00:29:44.912 "params": { 00:29:44.912 "trtype": "pcie", 00:29:44.912 "traddr": "0000:00:06.0", 00:29:44.912 "name": "Nvme0" 00:29:44.912 }, 00:29:44.912 "method": "bdev_nvme_attach_controller" 00:29:44.912 }, 00:29:44.912 { 00:29:44.912 "method": "bdev_wait_for_examine" 00:29:44.912 } 00:29:44.912 ] 00:29:44.912 } 00:29:44.912 ] 00:29:44.912 } 00:29:44.912 [2024-11-29 12:13:50.317460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.912 [2024-11-29 12:13:50.422382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.170  [2024-11-29T12:13:50.938Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:45.427 00:29:45.427 12:13:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:45.427 12:13:50 -- dd/basic_rw.sh@23 -- # count=7 00:29:45.427 12:13:50 -- dd/basic_rw.sh@24 -- # count=7 00:29:45.427 12:13:50 -- dd/basic_rw.sh@25 -- # size=57344 00:29:45.427 12:13:50 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:29:45.427 12:13:50 -- dd/common.sh@98 -- # xtrace_disable 00:29:45.427 12:13:50 -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 12:13:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:29:45.994 12:13:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:45.994 12:13:51 -- dd/common.sh@31 -- # xtrace_disable 00:29:45.994 12:13:51 -- common/autotest_common.sh@10 -- # set +x 00:29:46.252 [2024-11-29 12:13:51.544187] Starting SPDK v24.01.1-pre git 
sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:46.252 [2024-11-29 12:13:51.544416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145067 ] 00:29:46.252 { 00:29:46.252 "subsystems": [ 00:29:46.252 { 00:29:46.252 "subsystem": "bdev", 00:29:46.252 "config": [ 00:29:46.252 { 00:29:46.252 "params": { 00:29:46.252 "trtype": "pcie", 00:29:46.252 "traddr": "0000:00:06.0", 00:29:46.252 "name": "Nvme0" 00:29:46.252 }, 00:29:46.252 "method": "bdev_nvme_attach_controller" 00:29:46.252 }, 00:29:46.252 { 00:29:46.252 "method": "bdev_wait_for_examine" 00:29:46.252 } 00:29:46.252 ] 00:29:46.252 } 00:29:46.252 ] 00:29:46.252 } 00:29:46.252 [2024-11-29 12:13:51.693803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.510 [2024-11-29 12:13:51.795840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.511  [2024-11-29T12:13:52.280Z] Copying: 56/56 [kB] (average 54 MBps) 00:29:46.769 00:29:46.769 12:13:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:29:46.769 12:13:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:46.769 12:13:52 -- dd/common.sh@31 -- # xtrace_disable 00:29:46.769 12:13:52 -- common/autotest_common.sh@10 -- # set +x 00:29:47.028 { 00:29:47.028 "subsystems": [ 00:29:47.028 { 00:29:47.028 "subsystem": "bdev", 00:29:47.028 "config": [ 00:29:47.028 { 00:29:47.028 "params": { 00:29:47.028 "trtype": "pcie", 00:29:47.028 "traddr": "0000:00:06.0", 00:29:47.028 "name": "Nvme0" 00:29:47.028 }, 00:29:47.028 "method": "bdev_nvme_attach_controller" 00:29:47.028 }, 00:29:47.028 { 00:29:47.028 "method": "bdev_wait_for_examine" 00:29:47.028 } 00:29:47.028 ] 00:29:47.028 } 00:29:47.028 ] 00:29:47.028 } 00:29:47.028 [2024-11-29 12:13:52.308246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
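All of the START TEST / END TEST banners and the real/user/sys summaries threaded through this log come from the run_test helper in autotest_common.sh, which checks its arguments, prints an opening banner, times the named test body, and prints a closing banner. Its observable behaviour, approximated rather than quoted:

    run_test() {
      local name=$1; shift              # e.g. run_test dd_rw basic_rw 4096
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                         # source of the real/user/sys lines in this log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }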
00:29:47.028 [2024-11-29 12:13:52.308492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145083 ] 00:29:47.028 [2024-11-29 12:13:52.456724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.287 [2024-11-29 12:13:52.553482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.287  [2024-11-29T12:13:53.057Z] Copying: 56/56 [kB] (average 54 MBps) 00:29:47.546 00:29:47.546 12:13:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:47.546 12:13:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:29:47.546 12:13:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:47.546 12:13:53 -- dd/common.sh@11 -- # local nvme_ref= 00:29:47.546 12:13:53 -- dd/common.sh@12 -- # local size=57344 00:29:47.546 12:13:53 -- dd/common.sh@14 -- # local bs=1048576 00:29:47.546 12:13:53 -- dd/common.sh@15 -- # local count=1 00:29:47.546 12:13:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:47.546 12:13:53 -- dd/common.sh@18 -- # gen_conf 00:29:47.546 12:13:53 -- dd/common.sh@31 -- # xtrace_disable 00:29:47.546 12:13:53 -- common/autotest_common.sh@10 -- # set +x 00:29:47.805 [2024-11-29 12:13:53.090853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:47.805 [2024-11-29 12:13:53.091135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145104 ] 00:29:47.805 { 00:29:47.805 "subsystems": [ 00:29:47.805 { 00:29:47.805 "subsystem": "bdev", 00:29:47.805 "config": [ 00:29:47.805 { 00:29:47.805 "params": { 00:29:47.805 "trtype": "pcie", 00:29:47.805 "traddr": "0000:00:06.0", 00:29:47.805 "name": "Nvme0" 00:29:47.805 }, 00:29:47.805 "method": "bdev_nvme_attach_controller" 00:29:47.805 }, 00:29:47.805 { 00:29:47.805 "method": "bdev_wait_for_examine" 00:29:47.805 } 00:29:47.805 ] 00:29:47.805 } 00:29:47.805 ] 00:29:47.805 } 00:29:47.805 [2024-11-29 12:13:53.234654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.063 [2024-11-29 12:13:53.331088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.063  [2024-11-29T12:13:53.832Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:29:48.321 00:29:48.321 12:13:53 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:29:48.321 12:13:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:48.321 12:13:53 -- dd/basic_rw.sh@23 -- # count=3 00:29:48.321 12:13:53 -- dd/basic_rw.sh@24 -- # count=3 00:29:48.321 12:13:53 -- dd/basic_rw.sh@25 -- # size=49152 00:29:48.321 12:13:53 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:29:48.321 12:13:53 -- dd/common.sh@98 -- # xtrace_disable 00:29:48.321 12:13:53 -- common/autotest_common.sh@10 -- # set +x 00:29:48.888 12:13:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:29:48.888 12:13:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:48.888 12:13:54 -- dd/common.sh@31 -- # xtrace_disable 00:29:48.888 12:13:54 -- common/autotest_common.sh@10 -- # set +x 
00:29:48.888 { 00:29:48.888 "subsystems": [ 00:29:48.888 { 00:29:48.888 "subsystem": "bdev", 00:29:48.888 "config": [ 00:29:48.888 { 00:29:48.888 "params": { 00:29:48.888 "trtype": "pcie", 00:29:48.888 "traddr": "0000:00:06.0", 00:29:48.888 "name": "Nvme0" 00:29:48.888 }, 00:29:48.888 "method": "bdev_nvme_attach_controller" 00:29:48.888 }, 00:29:48.888 { 00:29:48.888 "method": "bdev_wait_for_examine" 00:29:48.888 } 00:29:48.888 ] 00:29:48.888 } 00:29:48.888 ] 00:29:48.888 } 00:29:48.888 [2024-11-29 12:13:54.388541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:48.888 [2024-11-29 12:13:54.388796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145124 ] 00:29:49.147 [2024-11-29 12:13:54.536437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.147 [2024-11-29 12:13:54.639997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.406  [2024-11-29T12:13:55.175Z] Copying: 48/48 [kB] (average 46 MBps) 00:29:49.664 00:29:49.664 12:13:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:29:49.664 12:13:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:49.664 12:13:55 -- dd/common.sh@31 -- # xtrace_disable 00:29:49.664 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:29:49.664 [2024-11-29 12:13:55.177676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:49.664 [2024-11-29 12:13:55.177871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145143 ] 00:29:49.922 { 00:29:49.922 "subsystems": [ 00:29:49.922 { 00:29:49.922 "subsystem": "bdev", 00:29:49.922 "config": [ 00:29:49.922 { 00:29:49.922 "params": { 00:29:49.922 "trtype": "pcie", 00:29:49.922 "traddr": "0000:00:06.0", 00:29:49.922 "name": "Nvme0" 00:29:49.922 }, 00:29:49.922 "method": "bdev_nvme_attach_controller" 00:29:49.922 }, 00:29:49.922 { 00:29:49.922 "method": "bdev_wait_for_examine" 00:29:49.922 } 00:29:49.922 ] 00:29:49.922 } 00:29:49.922 ] 00:29:49.922 } 00:29:49.922 [2024-11-29 12:13:55.318981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.922 [2024-11-29 12:13:55.417173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.179  [2024-11-29T12:13:55.948Z] Copying: 48/48 [kB] (average 46 MBps) 00:29:50.437 00:29:50.437 12:13:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:50.437 12:13:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:29:50.437 12:13:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:50.437 12:13:55 -- dd/common.sh@11 -- # local nvme_ref= 00:29:50.437 12:13:55 -- dd/common.sh@12 -- # local size=49152 00:29:50.437 12:13:55 -- dd/common.sh@14 -- # local bs=1048576 00:29:50.437 12:13:55 -- dd/common.sh@15 -- # local count=1 00:29:50.437 12:13:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:50.437 12:13:55 -- dd/common.sh@18 -- # gen_conf 00:29:50.437 12:13:55 -- 
dd/common.sh@31 -- # xtrace_disable 00:29:50.437 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:29:50.437 { 00:29:50.437 "subsystems": [ 00:29:50.437 { 00:29:50.437 "subsystem": "bdev", 00:29:50.437 "config": [ 00:29:50.437 { 00:29:50.437 "params": { 00:29:50.437 "trtype": "pcie", 00:29:50.437 "traddr": "0000:00:06.0", 00:29:50.437 "name": "Nvme0" 00:29:50.437 }, 00:29:50.437 "method": "bdev_nvme_attach_controller" 00:29:50.437 }, 00:29:50.437 { 00:29:50.437 "method": "bdev_wait_for_examine" 00:29:50.437 } 00:29:50.437 ] 00:29:50.437 } 00:29:50.437 ] 00:29:50.437 } 00:29:50.694 [2024-11-29 12:13:55.951797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:50.694 [2024-11-29 12:13:55.952337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145153 ] 00:29:50.694 [2024-11-29 12:13:56.103123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.694 [2024-11-29 12:13:56.198693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.978  [2024-11-29T12:13:56.746Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:51.235 00:29:51.235 12:13:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:51.235 12:13:56 -- dd/basic_rw.sh@23 -- # count=3 00:29:51.235 12:13:56 -- dd/basic_rw.sh@24 -- # count=3 00:29:51.235 12:13:56 -- dd/basic_rw.sh@25 -- # size=49152 00:29:51.235 12:13:56 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:29:51.235 12:13:56 -- dd/common.sh@98 -- # xtrace_disable 00:29:51.235 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:29:51.800 12:13:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:29:51.800 12:13:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:51.800 12:13:57 -- dd/common.sh@31 -- # xtrace_disable 00:29:51.800 12:13:57 -- common/autotest_common.sh@10 -- # set +x 00:29:51.800 [2024-11-29 12:13:57.182069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
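gen_bytes is the producer side of all of this: it emits the requested number of random-looking printable bytes, evidently used to populate dd.dump0 before each write (61440, 57344 and 49152 bytes above), and in the dd_rw_offset test further below it is captured straight into a shell variable, which is what the long lowercase-alphanumeric data= blob near the end of this section is. A hypothetical stand-in with the same observable behaviour:

    gen_bytes() {                        # hypothetical: print $1 random alphanumeric characters
      local count=$1
      tr -dc 'a-z0-9' < /dev/urandom | head -c "$count"
    }
    data=$(gen_bytes 4096)               # as captured at basic_rw.sh@56 for the offset test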
00:29:51.800 [2024-11-29 12:13:57.182321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145182 ] 00:29:51.800 { 00:29:51.800 "subsystems": [ 00:29:51.800 { 00:29:51.800 "subsystem": "bdev", 00:29:51.800 "config": [ 00:29:51.800 { 00:29:51.800 "params": { 00:29:51.800 "trtype": "pcie", 00:29:51.800 "traddr": "0000:00:06.0", 00:29:51.800 "name": "Nvme0" 00:29:51.800 }, 00:29:51.800 "method": "bdev_nvme_attach_controller" 00:29:51.800 }, 00:29:51.800 { 00:29:51.800 "method": "bdev_wait_for_examine" 00:29:51.800 } 00:29:51.800 ] 00:29:51.800 } 00:29:51.800 ] 00:29:51.800 } 00:29:52.058 [2024-11-29 12:13:57.332724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.058 [2024-11-29 12:13:57.434061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.315  [2024-11-29T12:13:58.084Z] Copying: 48/48 [kB] (average 46 MBps) 00:29:52.573 00:29:52.573 12:13:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:29:52.573 12:13:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:52.573 12:13:57 -- dd/common.sh@31 -- # xtrace_disable 00:29:52.573 12:13:57 -- common/autotest_common.sh@10 -- # set +x 00:29:52.573 [2024-11-29 12:13:57.956587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:52.573 [2024-11-29 12:13:57.956899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145195 ] 00:29:52.573 { 00:29:52.573 "subsystems": [ 00:29:52.573 { 00:29:52.573 "subsystem": "bdev", 00:29:52.573 "config": [ 00:29:52.573 { 00:29:52.573 "params": { 00:29:52.573 "trtype": "pcie", 00:29:52.573 "traddr": "0000:00:06.0", 00:29:52.573 "name": "Nvme0" 00:29:52.573 }, 00:29:52.573 "method": "bdev_nvme_attach_controller" 00:29:52.573 }, 00:29:52.573 { 00:29:52.573 "method": "bdev_wait_for_examine" 00:29:52.573 } 00:29:52.573 ] 00:29:52.573 } 00:29:52.573 ] 00:29:52.573 } 00:29:52.829 [2024-11-29 12:13:58.106309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.829 [2024-11-29 12:13:58.201810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.086  [2024-11-29T12:13:58.856Z] Copying: 48/48 [kB] (average 46 MBps) 00:29:53.345 00:29:53.345 12:13:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:53.345 12:13:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:29:53.345 12:13:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:53.345 12:13:58 -- dd/common.sh@11 -- # local nvme_ref= 00:29:53.345 12:13:58 -- dd/common.sh@12 -- # local size=49152 00:29:53.345 12:13:58 -- dd/common.sh@14 -- # local bs=1048576 00:29:53.345 12:13:58 -- dd/common.sh@15 -- # local count=1 00:29:53.345 12:13:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:53.345 12:13:58 -- dd/common.sh@18 -- # gen_conf 00:29:53.345 12:13:58 -- dd/common.sh@31 -- # xtrace_disable 00:29:53.345 12:13:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.345 { 
00:29:53.345 "subsystems": [ 00:29:53.345 { 00:29:53.345 "subsystem": "bdev", 00:29:53.345 "config": [ 00:29:53.345 { 00:29:53.345 "params": { 00:29:53.345 "trtype": "pcie", 00:29:53.345 "traddr": "0000:00:06.0", 00:29:53.345 "name": "Nvme0" 00:29:53.345 }, 00:29:53.345 "method": "bdev_nvme_attach_controller" 00:29:53.345 }, 00:29:53.345 { 00:29:53.345 "method": "bdev_wait_for_examine" 00:29:53.345 } 00:29:53.345 ] 00:29:53.345 } 00:29:53.345 ] 00:29:53.345 } 00:29:53.345 [2024-11-29 12:13:58.723936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:53.345 [2024-11-29 12:13:58.724463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145211 ] 00:29:53.603 [2024-11-29 12:13:58.882202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.603 [2024-11-29 12:13:58.978270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.861  [2024-11-29T12:13:59.632Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:29:54.121 00:29:54.121 ************************************ 00:29:54.121 END TEST dd_rw 00:29:54.121 ************************************ 00:29:54.121 00:29:54.121 real 0m17.472s 00:29:54.121 user 0m11.939s 00:29:54.121 sys 0m4.112s 00:29:54.121 12:13:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:54.121 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.121 12:13:59 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:29:54.121 12:13:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:54.121 12:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:54.121 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.121 ************************************ 00:29:54.121 START TEST dd_rw_offset 00:29:54.121 ************************************ 00:29:54.121 12:13:59 -- common/autotest_common.sh@1114 -- # basic_offset 00:29:54.121 12:13:59 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:29:54.121 12:13:59 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:29:54.121 12:13:59 -- dd/common.sh@98 -- # xtrace_disable 00:29:54.121 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.121 12:13:59 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:29:54.121 12:13:59 -- dd/basic_rw.sh@56 -- # 
data=3iklq8pcy83hg2jfrhxul9dppslylhzj7mtnbra6lzgjdm1quvpm76k0swaenghd7eaxxep6qg7urrhyyl8bkddmwvjnu1nbor5noxpbguoe5092gx28ycxk3p5j34vm5l1sa84sjo0q97077zqflbv6iegpsa1zx70xrrb9v5xocswv3fgpatrvx5oztimbzpq2sp9zex1y9ac1rgax95w0hvmrw6negslf0dp3unhu9hlaekderiv03ljxkqe12ugwkdss9yfbhzjlvggdyqqoo0fuhny23d99pvtkzdszm221uday3kcqxon21qrs6kmo6d962zofw1o2lkwmqncg1i1h437kdnpu22plxsblyboqsuwru8nr2fhzaiwutqyxre6reeizy5vs9dkt4beif7n6k6wk7eqm5x24tou8s2s7u25qwviiobknx34l527kbx48ovabkd5dinkcnpe543lekv5pe3j2clpyrz1hlc78vgzmqftksz19iy57wtphitd5ofalw0659qkzckgoqc7hivp65z2tudhxkrlkzcwn039ym85eitv1hq8jio1xs2dpi8jfsnfe8ksceose1ews653kkpvfqicpdja6gv3ou7d3erx8s52z1awi9vsgolxy9mx1sd62j83gq0lmkx7rsxbksz4axrglse4ldvdnndiy2mekkm4riqqaoum39p68wqycbas450jo5d8q45glaj3si77vfrzslc4t3sm9dzcvg92ktunbk0kvxr1zl8ic1cztx9wdceb2mpdmb69h9qhtf0212o5jbsabx2lhu7jtxy5ks4lluhuvcvwpc0c92w69jzu1lk07sjnimz03qbfrw9qlmz6h8fd7gurnypa36jkmp6rbn91gjmcaai2dpd0asyvkfsakuag66ddk5g7vj1ezbc8vxlfismbfpj34gt8813uaer0i24vffieasyldlt0erfiun8omat83rj1acqn5l15goseili1viyu1yarxtxhfrfheqq7nsscsumxl5ba4egned0krs2r0ki24bijg4b3rluixc7ctw74z9d70pzbldwagvlkbyma86oasohrdivqid6zef9qnht9f8u9j0j685n70ypb4pk8bgngqi8o74a4177nylj04ccnzk0jzywcbd0ox7chf8vcwactwgc887cpvz2t45yhxonmi47gfbpfwbbpqncijj601iouvric9v8ndpfbm4ndvtd6cvkhd5zlu4lyoi2eukidbx2lwg936550lvg7ihwptq1c5dqglv6jevc9gscs4zxb7buf8ewpu135yiuiber54ui6x1ochshiqz9nvpzr0xrdtbimj0aqb65gulj8i2kvqqdn0227d3g0dibxxmpr9ubhz4tuk7jqjku5vf8gmqri862uqi9vefwb3fqt2sfnomsa45b867fs8pbrh1bgz3yxhi7w48o0s7a82jv68lm51uot2mmr67n1il9pjz74zbq1izk0rrlgtzt7va3v8uklg12crft3yufe5bcw4476tlq1aghe7bir5o1q19c3m86gd31o7wymbhnsw6lly7k1ugydp486bx9barq39jcp57di8euzmmr3me0uzhbzvu9vb9exdeh6kqmiuo07jw2dbplge89jt7g7un3vr89h47y83plnjg3trswa936v0ewu9hhdi8k6x8aov6vpw3ymkta9he0pktvhawn51db9gwujvsf61lq0o19pc40whtqt1vdhs396uh7wy8fq8ccr0mfyvchoqr4176sei640uanc9eo7iulm4p3qvt6hwo2dhx7pvlm5yjlybimju5kyaqaaql4impjx905m156m37eg3kpxyft2v6t4yfq0og2nf8nepefni1tjjlhdy1c8rrmcvo1tb8x7s6e514bsdlrohc4u0qdp2853l4hs48aehrha01gfsppa64nxs3iod736eqtzo4rc6h41kp6xm7lzo4rb288hn2u88d1crpgbxg8x4f6m1n8n342cnbx5x5w1ne0yx484i1j99uiz63f1kldaomdeajsxto6da4kmebgyu850nbc5ca9km0ad8zax9l7h2i9y5v0mynpdw9zu6nt4omkcdkmwy0q1ssq95xqodl4q52p666679lxej66d36zhzb41n20sb0k1hpqd600ek8pxiv48bkmb7e69hva8g098i6mli2w42mtujsud4qgujbt4wyicrzixta2t850p5tkukkaoolyje52wzmnswj76whgxs71kbkhdki4afleifjl2nm0e1gozts5wx1ziz1r6gu0khs0kphfmt3bltgj9sr1hbh90k9xmy890lmli6v9y2jlai9xjnmybz39ufjuy9xnpfctnjkaj81xx1cvk7s09fwlxl4llgv3yh2h5no9xvbi72der3o3jlf8sw4d2ornt1nwxmduq7rzgz6axkd29cpmme027kczae231kiokitjqqy3vwyd8gnchrb87cgphgqofegapcrqxg9mx917ctbfnqzdxw1p87xnxmzdw6f5c530pdinh8r2dmrxj0wjmn9018nalmqhvm0lkdss9h3b7sz9w2ppktgfuz5fgjxpoytgqx5i2v1a4e2wxhh9u6p5cwofu6zyofs3tg95y3uq7w3w7nxkhif3jgpci6e9cr8ude0eey8dg633rmdsbln2j5qtbls628nwj5nqj370j6f6i23rlu7i6vd4gh517yz2krefjfi2sd3mla207dcfdz57k9dkvqhfgu1e1bnwvmpvdcic8qv0aflrx08l13hscjkyk5i9pzyz8tv2bnd0shptpzdvphwkalc6qoh6mt4oe8flzsuh6kxcw8a7d9wz6zo64p76vllin4hyihll3qctwh084wf3q3b2zlirg7vyu4qri9lqdzlbcjddtel0uc28u4jhno4wspmxbvwlsnqc79i20mle052mzo5dc1jx10z2lc74ycxu01jrxtt427p32m8skxcvkcqjxbcksmaaxinxc1v0wm8028x6mutz30tsf2xb4idj73dd9bkjw7fmszf1mejsr16uf53un9cunj5t0gawbsq7i7wrcuogje72xhx55wps5253f4whdfy3xj4w34rr1lnnh78yl73ikhzje2m7311t1cxolu7mshkd9594doo00vc8snypg9k6qhbx6lj1dynx2xzxdkxicgo2gl1onbtzgyhapi2jbzd1zubmx51xjs1km1yakcujke8ce04z1rzl9mk4wnuwohqmw0snms7sqwyv0taybuhkknn8yymoqwt654qrh5bgheqfjv3v51c5if9ca05mgvnwfb0g6lbenarfru6si9rfae0g4ox1tekasbx43vkhmqc0ogwx07kbkctx5psghgztc8pp1xtet5u56dosnqzfcl945tzb4pv30rt6ld0eamri44f0julxmrfp76shannc98k6aqrbvizjgq5yjyelt2oticogb9j04hfpgpzbs759upv0xjx6
q2wx8ow6jddsx457ws49s0w36cfxaxuw0kev9s095ed26erwg1fdt39jkd8hyv7exczhreolyxqwi61yewivhwhmrf8ju4ki4o572c3waut008lstm30pgval2lyct7qppaero9rjkrvui5tiiqq7c6rh8bd8rq49anr6zcxkqh4mbszcx2rhfl87grrhb6fzfkraavkgc0ajvx2wxi5hqbmooak5i87dy8cme7ax94wt4ef6orltlf7rh4yibyrm7fkmltxw1mhrkahuc95u5ln82tncl4ng0egx7uah00nz62copm0elm0ey7pbmioza1khqa2k7lln77b9o8zzqqfj3vvu9bwd3n0mxayb4p639iy2j6ygtv2cffnxiavcndx4b3rn9lsj3niwclce9bhrqzbkiykkwvyx4tkgj79etczic0up0895yh3gcyxyu50zbzpzrpb941a0ijwn8453qjmjzt30cocecimms48pygiseizdrgh5pwmnbdfa4feqca2ojnrx1q5nhczmlxyisd568jq9z 00:29:54.121 12:13:59 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:29:54.121 12:13:59 -- dd/basic_rw.sh@59 -- # gen_conf 00:29:54.121 12:13:59 -- dd/common.sh@31 -- # xtrace_disable 00:29:54.121 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.121 [2024-11-29 12:13:59.600663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:54.121 [2024-11-29 12:13:59.600910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145251 ] 00:29:54.121 { 00:29:54.121 "subsystems": [ 00:29:54.121 { 00:29:54.121 "subsystem": "bdev", 00:29:54.121 "config": [ 00:29:54.121 { 00:29:54.121 "params": { 00:29:54.121 "trtype": "pcie", 00:29:54.121 "traddr": "0000:00:06.0", 00:29:54.121 "name": "Nvme0" 00:29:54.121 }, 00:29:54.121 "method": "bdev_nvme_attach_controller" 00:29:54.121 }, 00:29:54.121 { 00:29:54.121 "method": "bdev_wait_for_examine" 00:29:54.121 } 00:29:54.121 ] 00:29:54.121 } 00:29:54.121 ] 00:29:54.121 } 00:29:54.380 [2024-11-29 12:13:59.743350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.380 [2024-11-29 12:13:59.840347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.639  [2024-11-29T12:14:00.408Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:29:54.897 00:29:54.897 12:14:00 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:29:54.897 12:14:00 -- dd/basic_rw.sh@65 -- # gen_conf 00:29:54.897 12:14:00 -- dd/common.sh@31 -- # xtrace_disable 00:29:54.897 12:14:00 -- common/autotest_common.sh@10 -- # set +x 00:29:54.897 [2024-11-29 12:14:00.354731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:54.897 [2024-11-29 12:14:00.354982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145274 ] 00:29:54.897 { 00:29:54.897 "subsystems": [ 00:29:54.897 { 00:29:54.897 "subsystem": "bdev", 00:29:54.897 "config": [ 00:29:54.897 { 00:29:54.897 "params": { 00:29:54.897 "trtype": "pcie", 00:29:54.897 "traddr": "0000:00:06.0", 00:29:54.897 "name": "Nvme0" 00:29:54.897 }, 00:29:54.897 "method": "bdev_nvme_attach_controller" 00:29:54.897 }, 00:29:54.897 { 00:29:54.897 "method": "bdev_wait_for_examine" 00:29:54.897 } 00:29:54.897 ] 00:29:54.897 } 00:29:54.897 ] 00:29:54.898 } 00:29:55.156 [2024-11-29 12:14:00.492612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.156 [2024-11-29 12:14:00.588669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.413  [2024-11-29T12:14:01.182Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:29:55.671 00:29:55.671 12:14:01 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:29:55.672 12:14:01 -- dd/basic_rw.sh@72 -- # [[ 3iklq8pcy83hg2jfrhxul9dppslylhzj7mtnbra6lzgjdm1quvpm76k0swaenghd7eaxxep6qg7urrhyyl8bkddmwvjnu1nbor5noxpbguoe5092gx28ycxk3p5j34vm5l1sa84sjo0q97077zqflbv6iegpsa1zx70xrrb9v5xocswv3fgpatrvx5oztimbzpq2sp9zex1y9ac1rgax95w0hvmrw6negslf0dp3unhu9hlaekderiv03ljxkqe12ugwkdss9yfbhzjlvggdyqqoo0fuhny23d99pvtkzdszm221uday3kcqxon21qrs6kmo6d962zofw1o2lkwmqncg1i1h437kdnpu22plxsblyboqsuwru8nr2fhzaiwutqyxre6reeizy5vs9dkt4beif7n6k6wk7eqm5x24tou8s2s7u25qwviiobknx34l527kbx48ovabkd5dinkcnpe543lekv5pe3j2clpyrz1hlc78vgzmqftksz19iy57wtphitd5ofalw0659qkzckgoqc7hivp65z2tudhxkrlkzcwn039ym85eitv1hq8jio1xs2dpi8jfsnfe8ksceose1ews653kkpvfqicpdja6gv3ou7d3erx8s52z1awi9vsgolxy9mx1sd62j83gq0lmkx7rsxbksz4axrglse4ldvdnndiy2mekkm4riqqaoum39p68wqycbas450jo5d8q45glaj3si77vfrzslc4t3sm9dzcvg92ktunbk0kvxr1zl8ic1cztx9wdceb2mpdmb69h9qhtf0212o5jbsabx2lhu7jtxy5ks4lluhuvcvwpc0c92w69jzu1lk07sjnimz03qbfrw9qlmz6h8fd7gurnypa36jkmp6rbn91gjmcaai2dpd0asyvkfsakuag66ddk5g7vj1ezbc8vxlfismbfpj34gt8813uaer0i24vffieasyldlt0erfiun8omat83rj1acqn5l15goseili1viyu1yarxtxhfrfheqq7nsscsumxl5ba4egned0krs2r0ki24bijg4b3rluixc7ctw74z9d70pzbldwagvlkbyma86oasohrdivqid6zef9qnht9f8u9j0j685n70ypb4pk8bgngqi8o74a4177nylj04ccnzk0jzywcbd0ox7chf8vcwactwgc887cpvz2t45yhxonmi47gfbpfwbbpqncijj601iouvric9v8ndpfbm4ndvtd6cvkhd5zlu4lyoi2eukidbx2lwg936550lvg7ihwptq1c5dqglv6jevc9gscs4zxb7buf8ewpu135yiuiber54ui6x1ochshiqz9nvpzr0xrdtbimj0aqb65gulj8i2kvqqdn0227d3g0dibxxmpr9ubhz4tuk7jqjku5vf8gmqri862uqi9vefwb3fqt2sfnomsa45b867fs8pbrh1bgz3yxhi7w48o0s7a82jv68lm51uot2mmr67n1il9pjz74zbq1izk0rrlgtzt7va3v8uklg12crft3yufe5bcw4476tlq1aghe7bir5o1q19c3m86gd31o7wymbhnsw6lly7k1ugydp486bx9barq39jcp57di8euzmmr3me0uzhbzvu9vb9exdeh6kqmiuo07jw2dbplge89jt7g7un3vr89h47y83plnjg3trswa936v0ewu9hhdi8k6x8aov6vpw3ymkta9he0pktvhawn51db9gwujvsf61lq0o19pc40whtqt1vdhs396uh7wy8fq8ccr0mfyvchoqr4176sei640uanc9eo7iulm4p3qvt6hwo2dhx7pvlm5yjlybimju5kyaqaaql4impjx905m156m37eg3kpxyft2v6t4yfq0og2nf8nepefni1tjjlhdy1c8rrmcvo1tb8x7s6e514bsdlrohc4u0qdp2853l4hs48aehrha01gfsppa64nxs3iod736eqtzo4rc6h41kp6xm7lzo4rb288hn2u88d1crpgbxg8x4f6m1n8n342cnbx5x5w1ne0yx484i1j99uiz63f1kldaomdeajsxto6da4kmebgyu850nbc5ca9km0ad8zax9l7h2i9y5v0mynpdw9zu6nt4omkcdkmwy0q1ssq95xqodl4q52p666679lxej66d36zhzb41n20sb0k1hpqd600ek8pxiv48bkmb7e69hva8g098i6mli2w42mtujsud4qgujbt4wyicrzixta2t850p5tkukkaoolyje52wzmnswj76whgxs71kbkhdki4afleifjl2nm0e1gozts5wx1ziz1r6gu0khs0kphfmt3bltgj9sr1hbh9
0k9xmy890lmli6v9y2jlai9xjnmybz39ufjuy9xnpfctnjkaj81xx1cvk7s09fwlxl4llgv3yh2h5no9xvbi72der3o3jlf8sw4d2ornt1nwxmduq7rzgz6axkd29cpmme027kczae231kiokitjqqy3vwyd8gnchrb87cgphgqofegapcrqxg9mx917ctbfnqzdxw1p87xnxmzdw6f5c530pdinh8r2dmrxj0wjmn9018nalmqhvm0lkdss9h3b7sz9w2ppktgfuz5fgjxpoytgqx5i2v1a4e2wxhh9u6p5cwofu6zyofs3tg95y3uq7w3w7nxkhif3jgpci6e9cr8ude0eey8dg633rmdsbln2j5qtbls628nwj5nqj370j6f6i23rlu7i6vd4gh517yz2krefjfi2sd3mla207dcfdz57k9dkvqhfgu1e1bnwvmpvdcic8qv0aflrx08l13hscjkyk5i9pzyz8tv2bnd0shptpzdvphwkalc6qoh6mt4oe8flzsuh6kxcw8a7d9wz6zo64p76vllin4hyihll3qctwh084wf3q3b2zlirg7vyu4qri9lqdzlbcjddtel0uc28u4jhno4wspmxbvwlsnqc79i20mle052mzo5dc1jx10z2lc74ycxu01jrxtt427p32m8skxcvkcqjxbcksmaaxinxc1v0wm8028x6mutz30tsf2xb4idj73dd9bkjw7fmszf1mejsr16uf53un9cunj5t0gawbsq7i7wrcuogje72xhx55wps5253f4whdfy3xj4w34rr1lnnh78yl73ikhzje2m7311t1cxolu7mshkd9594doo00vc8snypg9k6qhbx6lj1dynx2xzxdkxicgo2gl1onbtzgyhapi2jbzd1zubmx51xjs1km1yakcujke8ce04z1rzl9mk4wnuwohqmw0snms7sqwyv0taybuhkknn8yymoqwt654qrh5bgheqfjv3v51c5if9ca05mgvnwfb0g6lbenarfru6si9rfae0g4ox1tekasbx43vkhmqc0ogwx07kbkctx5psghgztc8pp1xtet5u56dosnqzfcl945tzb4pv30rt6ld0eamri44f0julxmrfp76shannc98k6aqrbvizjgq5yjyelt2oticogb9j04hfpgpzbs759upv0xjx6q2wx8ow6jddsx457ws49s0w36cfxaxuw0kev9s095ed26erwg1fdt39jkd8hyv7exczhreolyxqwi61yewivhwhmrf8ju4ki4o572c3waut008lstm30pgval2lyct7qppaero9rjkrvui5tiiqq7c6rh8bd8rq49anr6zcxkqh4mbszcx2rhfl87grrhb6fzfkraavkgc0ajvx2wxi5hqbmooak5i87dy8cme7ax94wt4ef6orltlf7rh4yibyrm7fkmltxw1mhrkahuc95u5ln82tncl4ng0egx7uah00nz62copm0elm0ey7pbmioza1khqa2k7lln77b9o8zzqqfj3vvu9bwd3n0mxayb4p639iy2j6ygtv2cffnxiavcndx4b3rn9lsj3niwclce9bhrqzbkiykkwvyx4tkgj79etczic0up0895yh3gcyxyu50zbzpzrpb941a0ijwn8453qjmjzt30cocecimms48pygiseizdrgh5pwmnbdfa4feqca2ojnrx1q5nhczmlxyisd568jq9z == \3\i\k\l\q\8\p\c\y\8\3\h\g\2\j\f\r\h\x\u\l\9\d\p\p\s\l\y\l\h\z\j\7\m\t\n\b\r\a\6\l\z\g\j\d\m\1\q\u\v\p\m\7\6\k\0\s\w\a\e\n\g\h\d\7\e\a\x\x\e\p\6\q\g\7\u\r\r\h\y\y\l\8\b\k\d\d\m\w\v\j\n\u\1\n\b\o\r\5\n\o\x\p\b\g\u\o\e\5\0\9\2\g\x\2\8\y\c\x\k\3\p\5\j\3\4\v\m\5\l\1\s\a\8\4\s\j\o\0\q\9\7\0\7\7\z\q\f\l\b\v\6\i\e\g\p\s\a\1\z\x\7\0\x\r\r\b\9\v\5\x\o\c\s\w\v\3\f\g\p\a\t\r\v\x\5\o\z\t\i\m\b\z\p\q\2\s\p\9\z\e\x\1\y\9\a\c\1\r\g\a\x\9\5\w\0\h\v\m\r\w\6\n\e\g\s\l\f\0\d\p\3\u\n\h\u\9\h\l\a\e\k\d\e\r\i\v\0\3\l\j\x\k\q\e\1\2\u\g\w\k\d\s\s\9\y\f\b\h\z\j\l\v\g\g\d\y\q\q\o\o\0\f\u\h\n\y\2\3\d\9\9\p\v\t\k\z\d\s\z\m\2\2\1\u\d\a\y\3\k\c\q\x\o\n\2\1\q\r\s\6\k\m\o\6\d\9\6\2\z\o\f\w\1\o\2\l\k\w\m\q\n\c\g\1\i\1\h\4\3\7\k\d\n\p\u\2\2\p\l\x\s\b\l\y\b\o\q\s\u\w\r\u\8\n\r\2\f\h\z\a\i\w\u\t\q\y\x\r\e\6\r\e\e\i\z\y\5\v\s\9\d\k\t\4\b\e\i\f\7\n\6\k\6\w\k\7\e\q\m\5\x\2\4\t\o\u\8\s\2\s\7\u\2\5\q\w\v\i\i\o\b\k\n\x\3\4\l\5\2\7\k\b\x\4\8\o\v\a\b\k\d\5\d\i\n\k\c\n\p\e\5\4\3\l\e\k\v\5\p\e\3\j\2\c\l\p\y\r\z\1\h\l\c\7\8\v\g\z\m\q\f\t\k\s\z\1\9\i\y\5\7\w\t\p\h\i\t\d\5\o\f\a\l\w\0\6\5\9\q\k\z\c\k\g\o\q\c\7\h\i\v\p\6\5\z\2\t\u\d\h\x\k\r\l\k\z\c\w\n\0\3\9\y\m\8\5\e\i\t\v\1\h\q\8\j\i\o\1\x\s\2\d\p\i\8\j\f\s\n\f\e\8\k\s\c\e\o\s\e\1\e\w\s\6\5\3\k\k\p\v\f\q\i\c\p\d\j\a\6\g\v\3\o\u\7\d\3\e\r\x\8\s\5\2\z\1\a\w\i\9\v\s\g\o\l\x\y\9\m\x\1\s\d\6\2\j\8\3\g\q\0\l\m\k\x\7\r\s\x\b\k\s\z\4\a\x\r\g\l\s\e\4\l\d\v\d\n\n\d\i\y\2\m\e\k\k\m\4\r\i\q\q\a\o\u\m\3\9\p\6\8\w\q\y\c\b\a\s\4\5\0\j\o\5\d\8\q\4\5\g\l\a\j\3\s\i\7\7\v\f\r\z\s\l\c\4\t\3\s\m\9\d\z\c\v\g\9\2\k\t\u\n\b\k\0\k\v\x\r\1\z\l\8\i\c\1\c\z\t\x\9\w\d\c\e\b\2\m\p\d\m\b\6\9\h\9\q\h\t\f\0\2\1\2\o\5\j\b\s\a\b\x\2\l\h\u\7\j\t\x\y\5\k\s\4\l\l\u\h\u\v\c\v\w\p\c\0\c\9\2\w\6\9\j\z\u\1\l\k\0\7\s\j\n\i\m\z\0\3\q\b\f\r\w\9\q\l\m\z\6\h\8\f\d\7\g\u\r\n\y\p\a\3\6\j\k\m\p\6\r\b\n\9\1\g\j\m\c\a\a\i\2\d\p\d\
0\a\s\y\v\k\f\s\a\k\u\a\g\6\6\d\d\k\5\g\7\v\j\1\e\z\b\c\8\v\x\l\f\i\s\m\b\f\p\j\3\4\g\t\8\8\1\3\u\a\e\r\0\i\2\4\v\f\f\i\e\a\s\y\l\d\l\t\0\e\r\f\i\u\n\8\o\m\a\t\8\3\r\j\1\a\c\q\n\5\l\1\5\g\o\s\e\i\l\i\1\v\i\y\u\1\y\a\r\x\t\x\h\f\r\f\h\e\q\q\7\n\s\s\c\s\u\m\x\l\5\b\a\4\e\g\n\e\d\0\k\r\s\2\r\0\k\i\2\4\b\i\j\g\4\b\3\r\l\u\i\x\c\7\c\t\w\7\4\z\9\d\7\0\p\z\b\l\d\w\a\g\v\l\k\b\y\m\a\8\6\o\a\s\o\h\r\d\i\v\q\i\d\6\z\e\f\9\q\n\h\t\9\f\8\u\9\j\0\j\6\8\5\n\7\0\y\p\b\4\p\k\8\b\g\n\g\q\i\8\o\7\4\a\4\1\7\7\n\y\l\j\0\4\c\c\n\z\k\0\j\z\y\w\c\b\d\0\o\x\7\c\h\f\8\v\c\w\a\c\t\w\g\c\8\8\7\c\p\v\z\2\t\4\5\y\h\x\o\n\m\i\4\7\g\f\b\p\f\w\b\b\p\q\n\c\i\j\j\6\0\1\i\o\u\v\r\i\c\9\v\8\n\d\p\f\b\m\4\n\d\v\t\d\6\c\v\k\h\d\5\z\l\u\4\l\y\o\i\2\e\u\k\i\d\b\x\2\l\w\g\9\3\6\5\5\0\l\v\g\7\i\h\w\p\t\q\1\c\5\d\q\g\l\v\6\j\e\v\c\9\g\s\c\s\4\z\x\b\7\b\u\f\8\e\w\p\u\1\3\5\y\i\u\i\b\e\r\5\4\u\i\6\x\1\o\c\h\s\h\i\q\z\9\n\v\p\z\r\0\x\r\d\t\b\i\m\j\0\a\q\b\6\5\g\u\l\j\8\i\2\k\v\q\q\d\n\0\2\2\7\d\3\g\0\d\i\b\x\x\m\p\r\9\u\b\h\z\4\t\u\k\7\j\q\j\k\u\5\v\f\8\g\m\q\r\i\8\6\2\u\q\i\9\v\e\f\w\b\3\f\q\t\2\s\f\n\o\m\s\a\4\5\b\8\6\7\f\s\8\p\b\r\h\1\b\g\z\3\y\x\h\i\7\w\4\8\o\0\s\7\a\8\2\j\v\6\8\l\m\5\1\u\o\t\2\m\m\r\6\7\n\1\i\l\9\p\j\z\7\4\z\b\q\1\i\z\k\0\r\r\l\g\t\z\t\7\v\a\3\v\8\u\k\l\g\1\2\c\r\f\t\3\y\u\f\e\5\b\c\w\4\4\7\6\t\l\q\1\a\g\h\e\7\b\i\r\5\o\1\q\1\9\c\3\m\8\6\g\d\3\1\o\7\w\y\m\b\h\n\s\w\6\l\l\y\7\k\1\u\g\y\d\p\4\8\6\b\x\9\b\a\r\q\3\9\j\c\p\5\7\d\i\8\e\u\z\m\m\r\3\m\e\0\u\z\h\b\z\v\u\9\v\b\9\e\x\d\e\h\6\k\q\m\i\u\o\0\7\j\w\2\d\b\p\l\g\e\8\9\j\t\7\g\7\u\n\3\v\r\8\9\h\4\7\y\8\3\p\l\n\j\g\3\t\r\s\w\a\9\3\6\v\0\e\w\u\9\h\h\d\i\8\k\6\x\8\a\o\v\6\v\p\w\3\y\m\k\t\a\9\h\e\0\p\k\t\v\h\a\w\n\5\1\d\b\9\g\w\u\j\v\s\f\6\1\l\q\0\o\1\9\p\c\4\0\w\h\t\q\t\1\v\d\h\s\3\9\6\u\h\7\w\y\8\f\q\8\c\c\r\0\m\f\y\v\c\h\o\q\r\4\1\7\6\s\e\i\6\4\0\u\a\n\c\9\e\o\7\i\u\l\m\4\p\3\q\v\t\6\h\w\o\2\d\h\x\7\p\v\l\m\5\y\j\l\y\b\i\m\j\u\5\k\y\a\q\a\a\q\l\4\i\m\p\j\x\9\0\5\m\1\5\6\m\3\7\e\g\3\k\p\x\y\f\t\2\v\6\t\4\y\f\q\0\o\g\2\n\f\8\n\e\p\e\f\n\i\1\t\j\j\l\h\d\y\1\c\8\r\r\m\c\v\o\1\t\b\8\x\7\s\6\e\5\1\4\b\s\d\l\r\o\h\c\4\u\0\q\d\p\2\8\5\3\l\4\h\s\4\8\a\e\h\r\h\a\0\1\g\f\s\p\p\a\6\4\n\x\s\3\i\o\d\7\3\6\e\q\t\z\o\4\r\c\6\h\4\1\k\p\6\x\m\7\l\z\o\4\r\b\2\8\8\h\n\2\u\8\8\d\1\c\r\p\g\b\x\g\8\x\4\f\6\m\1\n\8\n\3\4\2\c\n\b\x\5\x\5\w\1\n\e\0\y\x\4\8\4\i\1\j\9\9\u\i\z\6\3\f\1\k\l\d\a\o\m\d\e\a\j\s\x\t\o\6\d\a\4\k\m\e\b\g\y\u\8\5\0\n\b\c\5\c\a\9\k\m\0\a\d\8\z\a\x\9\l\7\h\2\i\9\y\5\v\0\m\y\n\p\d\w\9\z\u\6\n\t\4\o\m\k\c\d\k\m\w\y\0\q\1\s\s\q\9\5\x\q\o\d\l\4\q\5\2\p\6\6\6\6\7\9\l\x\e\j\6\6\d\3\6\z\h\z\b\4\1\n\2\0\s\b\0\k\1\h\p\q\d\6\0\0\e\k\8\p\x\i\v\4\8\b\k\m\b\7\e\6\9\h\v\a\8\g\0\9\8\i\6\m\l\i\2\w\4\2\m\t\u\j\s\u\d\4\q\g\u\j\b\t\4\w\y\i\c\r\z\i\x\t\a\2\t\8\5\0\p\5\t\k\u\k\k\a\o\o\l\y\j\e\5\2\w\z\m\n\s\w\j\7\6\w\h\g\x\s\7\1\k\b\k\h\d\k\i\4\a\f\l\e\i\f\j\l\2\n\m\0\e\1\g\o\z\t\s\5\w\x\1\z\i\z\1\r\6\g\u\0\k\h\s\0\k\p\h\f\m\t\3\b\l\t\g\j\9\s\r\1\h\b\h\9\0\k\9\x\m\y\8\9\0\l\m\l\i\6\v\9\y\2\j\l\a\i\9\x\j\n\m\y\b\z\3\9\u\f\j\u\y\9\x\n\p\f\c\t\n\j\k\a\j\8\1\x\x\1\c\v\k\7\s\0\9\f\w\l\x\l\4\l\l\g\v\3\y\h\2\h\5\n\o\9\x\v\b\i\7\2\d\e\r\3\o\3\j\l\f\8\s\w\4\d\2\o\r\n\t\1\n\w\x\m\d\u\q\7\r\z\g\z\6\a\x\k\d\2\9\c\p\m\m\e\0\2\7\k\c\z\a\e\2\3\1\k\i\o\k\i\t\j\q\q\y\3\v\w\y\d\8\g\n\c\h\r\b\8\7\c\g\p\h\g\q\o\f\e\g\a\p\c\r\q\x\g\9\m\x\9\1\7\c\t\b\f\n\q\z\d\x\w\1\p\8\7\x\n\x\m\z\d\w\6\f\5\c\5\3\0\p\d\i\n\h\8\r\2\d\m\r\x\j\0\w\j\m\n\9\0\1\8\n\a\l\m\q\h\v\m\0\l\k\d\s\s\9\h\3\b\7\s\z\9\w\2\p\p\k\t\g\f\u\z\5\f\g\j\x\p\o\y\t\g\q\x\5\i\2\v\1\a\4\e\2\w\x\h\h\9\u\6\p\5\c\w\o\f\u\6\z\y\o\f\s\3\t\g\9\5\y\3\u\q\7\w\3\w\7\n
\x\k\h\i\f\3\j\g\p\c\i\6\e\9\c\r\8\u\d\e\0\e\e\y\8\d\g\6\3\3\r\m\d\s\b\l\n\2\j\5\q\t\b\l\s\6\2\8\n\w\j\5\n\q\j\3\7\0\j\6\f\6\i\2\3\r\l\u\7\i\6\v\d\4\g\h\5\1\7\y\z\2\k\r\e\f\j\f\i\2\s\d\3\m\l\a\2\0\7\d\c\f\d\z\5\7\k\9\d\k\v\q\h\f\g\u\1\e\1\b\n\w\v\m\p\v\d\c\i\c\8\q\v\0\a\f\l\r\x\0\8\l\1\3\h\s\c\j\k\y\k\5\i\9\p\z\y\z\8\t\v\2\b\n\d\0\s\h\p\t\p\z\d\v\p\h\w\k\a\l\c\6\q\o\h\6\m\t\4\o\e\8\f\l\z\s\u\h\6\k\x\c\w\8\a\7\d\9\w\z\6\z\o\6\4\p\7\6\v\l\l\i\n\4\h\y\i\h\l\l\3\q\c\t\w\h\0\8\4\w\f\3\q\3\b\2\z\l\i\r\g\7\v\y\u\4\q\r\i\9\l\q\d\z\l\b\c\j\d\d\t\e\l\0\u\c\2\8\u\4\j\h\n\o\4\w\s\p\m\x\b\v\w\l\s\n\q\c\7\9\i\2\0\m\l\e\0\5\2\m\z\o\5\d\c\1\j\x\1\0\z\2\l\c\7\4\y\c\x\u\0\1\j\r\x\t\t\4\2\7\p\3\2\m\8\s\k\x\c\v\k\c\q\j\x\b\c\k\s\m\a\a\x\i\n\x\c\1\v\0\w\m\8\0\2\8\x\6\m\u\t\z\3\0\t\s\f\2\x\b\4\i\d\j\7\3\d\d\9\b\k\j\w\7\f\m\s\z\f\1\m\e\j\s\r\1\6\u\f\5\3\u\n\9\c\u\n\j\5\t\0\g\a\w\b\s\q\7\i\7\w\r\c\u\o\g\j\e\7\2\x\h\x\5\5\w\p\s\5\2\5\3\f\4\w\h\d\f\y\3\x\j\4\w\3\4\r\r\1\l\n\n\h\7\8\y\l\7\3\i\k\h\z\j\e\2\m\7\3\1\1\t\1\c\x\o\l\u\7\m\s\h\k\d\9\5\9\4\d\o\o\0\0\v\c\8\s\n\y\p\g\9\k\6\q\h\b\x\6\l\j\1\d\y\n\x\2\x\z\x\d\k\x\i\c\g\o\2\g\l\1\o\n\b\t\z\g\y\h\a\p\i\2\j\b\z\d\1\z\u\b\m\x\5\1\x\j\s\1\k\m\1\y\a\k\c\u\j\k\e\8\c\e\0\4\z\1\r\z\l\9\m\k\4\w\n\u\w\o\h\q\m\w\0\s\n\m\s\7\s\q\w\y\v\0\t\a\y\b\u\h\k\k\n\n\8\y\y\m\o\q\w\t\6\5\4\q\r\h\5\b\g\h\e\q\f\j\v\3\v\5\1\c\5\i\f\9\c\a\0\5\m\g\v\n\w\f\b\0\g\6\l\b\e\n\a\r\f\r\u\6\s\i\9\r\f\a\e\0\g\4\o\x\1\t\e\k\a\s\b\x\4\3\v\k\h\m\q\c\0\o\g\w\x\0\7\k\b\k\c\t\x\5\p\s\g\h\g\z\t\c\8\p\p\1\x\t\e\t\5\u\5\6\d\o\s\n\q\z\f\c\l\9\4\5\t\z\b\4\p\v\3\0\r\t\6\l\d\0\e\a\m\r\i\4\4\f\0\j\u\l\x\m\r\f\p\7\6\s\h\a\n\n\c\9\8\k\6\a\q\r\b\v\i\z\j\g\q\5\y\j\y\e\l\t\2\o\t\i\c\o\g\b\9\j\0\4\h\f\p\g\p\z\b\s\7\5\9\u\p\v\0\x\j\x\6\q\2\w\x\8\o\w\6\j\d\d\s\x\4\5\7\w\s\4\9\s\0\w\3\6\c\f\x\a\x\u\w\0\k\e\v\9\s\0\9\5\e\d\2\6\e\r\w\g\1\f\d\t\3\9\j\k\d\8\h\y\v\7\e\x\c\z\h\r\e\o\l\y\x\q\w\i\6\1\y\e\w\i\v\h\w\h\m\r\f\8\j\u\4\k\i\4\o\5\7\2\c\3\w\a\u\t\0\0\8\l\s\t\m\3\0\p\g\v\a\l\2\l\y\c\t\7\q\p\p\a\e\r\o\9\r\j\k\r\v\u\i\5\t\i\i\q\q\7\c\6\r\h\8\b\d\8\r\q\4\9\a\n\r\6\z\c\x\k\q\h\4\m\b\s\z\c\x\2\r\h\f\l\8\7\g\r\r\h\b\6\f\z\f\k\r\a\a\v\k\g\c\0\a\j\v\x\2\w\x\i\5\h\q\b\m\o\o\a\k\5\i\8\7\d\y\8\c\m\e\7\a\x\9\4\w\t\4\e\f\6\o\r\l\t\l\f\7\r\h\4\y\i\b\y\r\m\7\f\k\m\l\t\x\w\1\m\h\r\k\a\h\u\c\9\5\u\5\l\n\8\2\t\n\c\l\4\n\g\0\e\g\x\7\u\a\h\0\0\n\z\6\2\c\o\p\m\0\e\l\m\0\e\y\7\p\b\m\i\o\z\a\1\k\h\q\a\2\k\7\l\l\n\7\7\b\9\o\8\z\z\q\q\f\j\3\v\v\u\9\b\w\d\3\n\0\m\x\a\y\b\4\p\6\3\9\i\y\2\j\6\y\g\t\v\2\c\f\f\n\x\i\a\v\c\n\d\x\4\b\3\r\n\9\l\s\j\3\n\i\w\c\l\c\e\9\b\h\r\q\z\b\k\i\y\k\k\w\v\y\x\4\t\k\g\j\7\9\e\t\c\z\i\c\0\u\p\0\8\9\5\y\h\3\g\c\y\x\y\u\5\0\z\b\z\p\z\r\p\b\9\4\1\a\0\i\j\w\n\8\4\5\3\q\j\m\j\z\t\3\0\c\o\c\e\c\i\m\m\s\4\8\p\y\g\i\s\e\i\z\d\r\g\h\5\p\w\m\n\b\d\f\a\4\f\e\q\c\a\2\o\j\n\r\x\1\q\5\n\h\c\z\m\l\x\y\i\s\d\5\6\8\j\q\9\z ]] 00:29:55.672 00:29:55.672 real 0m1.575s 00:29:55.672 user 0m1.014s 00:29:55.672 sys 0m0.415s 00:29:55.672 12:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:55.672 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:29:55.672 ************************************ 00:29:55.672 END TEST dd_rw_offset 00:29:55.672 ************************************ 00:29:55.672 12:14:01 -- dd/basic_rw.sh@1 -- # cleanup 00:29:55.672 12:14:01 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:29:55.672 12:14:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:55.672 12:14:01 -- dd/common.sh@11 -- # local nvme_ref= 00:29:55.672 12:14:01 -- dd/common.sh@12 -- # local size=0xffff 00:29:55.672 12:14:01 -- dd/common.sh@14 -- 
# local bs=1048576 00:29:55.672 12:14:01 -- dd/common.sh@15 -- # local count=1 00:29:55.672 12:14:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:55.672 12:14:01 -- dd/common.sh@18 -- # gen_conf 00:29:55.672 12:14:01 -- dd/common.sh@31 -- # xtrace_disable 00:29:55.672 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:29:55.672 { 00:29:55.672 "subsystems": [ 00:29:55.672 { 00:29:55.672 "subsystem": "bdev", 00:29:55.672 "config": [ 00:29:55.672 { 00:29:55.672 "params": { 00:29:55.672 "trtype": "pcie", 00:29:55.672 "traddr": "0000:00:06.0", 00:29:55.672 "name": "Nvme0" 00:29:55.672 }, 00:29:55.672 "method": "bdev_nvme_attach_controller" 00:29:55.672 }, 00:29:55.672 { 00:29:55.672 "method": "bdev_wait_for_examine" 00:29:55.672 } 00:29:55.672 ] 00:29:55.672 } 00:29:55.672 ] 00:29:55.672 } 00:29:55.672 [2024-11-29 12:14:01.175483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:55.672 [2024-11-29 12:14:01.175749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145305 ] 00:29:56.022 [2024-11-29 12:14:01.323577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.022 [2024-11-29 12:14:01.418675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.281  [2024-11-29T12:14:02.051Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:29:56.540 00:29:56.540 12:14:01 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:56.540 00:29:56.540 real 0m21.181s 00:29:56.540 user 0m14.246s 00:29:56.540 sys 0m5.181s 00:29:56.540 12:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:56.540 ************************************ 00:29:56.540 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:29:56.540 END TEST spdk_dd_basic_rw 00:29:56.540 ************************************ 00:29:56.540 12:14:01 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:29:56.540 12:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:56.540 12:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:56.540 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:29:56.540 ************************************ 00:29:56.540 START TEST spdk_dd_posix 00:29:56.540 ************************************ 00:29:56.540 12:14:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:29:56.540 * Looking for test storage... 
00:29:56.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:56.540 12:14:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:56.540 12:14:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:56.540 12:14:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:56.799 12:14:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:56.799 12:14:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:56.799 12:14:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:56.799 12:14:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:56.799 12:14:02 -- scripts/common.sh@335 -- # IFS=.-: 00:29:56.799 12:14:02 -- scripts/common.sh@335 -- # read -ra ver1 00:29:56.799 12:14:02 -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.799 12:14:02 -- scripts/common.sh@336 -- # read -ra ver2 00:29:56.799 12:14:02 -- scripts/common.sh@337 -- # local 'op=<' 00:29:56.799 12:14:02 -- scripts/common.sh@339 -- # ver1_l=2 00:29:56.799 12:14:02 -- scripts/common.sh@340 -- # ver2_l=1 00:29:56.799 12:14:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:56.799 12:14:02 -- scripts/common.sh@343 -- # case "$op" in 00:29:56.799 12:14:02 -- scripts/common.sh@344 -- # : 1 00:29:56.799 12:14:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:56.799 12:14:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:56.799 12:14:02 -- scripts/common.sh@364 -- # decimal 1 00:29:56.799 12:14:02 -- scripts/common.sh@352 -- # local d=1 00:29:56.799 12:14:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.799 12:14:02 -- scripts/common.sh@354 -- # echo 1 00:29:56.799 12:14:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:56.799 12:14:02 -- scripts/common.sh@365 -- # decimal 2 00:29:56.799 12:14:02 -- scripts/common.sh@352 -- # local d=2 00:29:56.799 12:14:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.799 12:14:02 -- scripts/common.sh@354 -- # echo 2 00:29:56.799 12:14:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:56.799 12:14:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:56.799 12:14:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:56.799 12:14:02 -- scripts/common.sh@367 -- # return 0 00:29:56.799 12:14:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.799 12:14:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.799 --rc genhtml_branch_coverage=1 00:29:56.799 --rc genhtml_function_coverage=1 00:29:56.799 --rc genhtml_legend=1 00:29:56.799 --rc geninfo_all_blocks=1 00:29:56.799 --rc geninfo_unexecuted_blocks=1 00:29:56.799 00:29:56.799 ' 00:29:56.799 12:14:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.799 --rc genhtml_branch_coverage=1 00:29:56.799 --rc genhtml_function_coverage=1 00:29:56.799 --rc genhtml_legend=1 00:29:56.799 --rc geninfo_all_blocks=1 00:29:56.799 --rc geninfo_unexecuted_blocks=1 00:29:56.799 00:29:56.799 ' 00:29:56.799 12:14:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.799 --rc genhtml_branch_coverage=1 00:29:56.799 --rc genhtml_function_coverage=1 00:29:56.799 --rc genhtml_legend=1 00:29:56.799 --rc geninfo_all_blocks=1 00:29:56.799 --rc geninfo_unexecuted_blocks=1 00:29:56.799 00:29:56.799 ' 00:29:56.799 12:14:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.799 --rc genhtml_branch_coverage=1 00:29:56.799 --rc genhtml_function_coverage=1 00:29:56.799 --rc genhtml_legend=1 00:29:56.799 --rc geninfo_all_blocks=1 00:29:56.799 --rc geninfo_unexecuted_blocks=1 00:29:56.799 00:29:56.799 ' 00:29:56.799 12:14:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:56.799 12:14:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.799 12:14:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.799 12:14:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.799 12:14:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:56.800 12:14:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:56.800 12:14:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:56.800 12:14:02 -- paths/export.sh@5 -- # export PATH 00:29:56.800 12:14:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:56.800 12:14:02 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:29:56.800 12:14:02 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:29:56.800 12:14:02 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:29:56.800 12:14:02 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:29:56.800 12:14:02 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:56.800 12:14:02 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:56.800 12:14:02 -- 
dd/posix.sh@130 -- # tests 00:29:56.800 12:14:02 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:29:56.800 * First test run, using AIO 00:29:56.800 12:14:02 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:29:56.800 12:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:56.800 12:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:56.800 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:56.800 ************************************ 00:29:56.800 START TEST dd_flag_append 00:29:56.800 ************************************ 00:29:56.800 12:14:02 -- common/autotest_common.sh@1114 -- # append 00:29:56.800 12:14:02 -- dd/posix.sh@16 -- # local dump0 00:29:56.800 12:14:02 -- dd/posix.sh@17 -- # local dump1 00:29:56.800 12:14:02 -- dd/posix.sh@19 -- # gen_bytes 32 00:29:56.800 12:14:02 -- dd/common.sh@98 -- # xtrace_disable 00:29:56.800 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:56.800 12:14:02 -- dd/posix.sh@19 -- # dump0=mvbqip3pm0n3tx45wzzo3gkhnmauva58 00:29:56.800 12:14:02 -- dd/posix.sh@20 -- # gen_bytes 32 00:29:56.800 12:14:02 -- dd/common.sh@98 -- # xtrace_disable 00:29:56.800 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:56.800 12:14:02 -- dd/posix.sh@20 -- # dump1=3qhnlsr8vmq410xd4njr6f5r1khgcevk 00:29:56.800 12:14:02 -- dd/posix.sh@22 -- # printf %s mvbqip3pm0n3tx45wzzo3gkhnmauva58 00:29:56.800 12:14:02 -- dd/posix.sh@23 -- # printf %s 3qhnlsr8vmq410xd4njr6f5r1khgcevk 00:29:56.800 12:14:02 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:29:56.800 [2024-11-29 12:14:02.207231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:56.800 [2024-11-29 12:14:02.207968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145384 ] 00:29:57.058 [2024-11-29 12:14:02.353214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.058 [2024-11-29 12:14:02.455098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.058  [2024-11-29T12:14:03.134Z] Copying: 32/32 [B] (average 31 kBps) 00:29:57.623 00:29:57.623 12:14:02 -- dd/posix.sh@27 -- # [[ 3qhnlsr8vmq410xd4njr6f5r1khgcevkmvbqip3pm0n3tx45wzzo3gkhnmauva58 == \3\q\h\n\l\s\r\8\v\m\q\4\1\0\x\d\4\n\j\r\6\f\5\r\1\k\h\g\c\e\v\k\m\v\b\q\i\p\3\p\m\0\n\3\t\x\4\5\w\z\z\o\3\g\k\h\n\m\a\u\v\a\5\8 ]] 00:29:57.623 00:29:57.623 real 0m0.688s 00:29:57.623 user 0m0.333s 00:29:57.623 sys 0m0.214s 00:29:57.623 12:14:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:57.623 ************************************ 00:29:57.623 END TEST dd_flag_append 00:29:57.623 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:57.623 ************************************ 00:29:57.623 12:14:02 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:29:57.623 12:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:57.623 12:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:57.623 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:57.623 ************************************ 00:29:57.623 START TEST dd_flag_directory 00:29:57.623 ************************************ 00:29:57.623 12:14:02 -- common/autotest_common.sh@1114 -- # directory 00:29:57.623 12:14:02 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:57.623 12:14:02 -- common/autotest_common.sh@650 -- # local es=0 00:29:57.623 12:14:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:57.623 12:14:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:57.623 12:14:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.623 12:14:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:57.623 12:14:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.623 12:14:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:57.623 12:14:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.623 12:14:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:57.623 12:14:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:57.623 12:14:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:57.623 [2024-11-29 12:14:02.942903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:57.623 [2024-11-29 12:14:02.943377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145417 ] 00:29:57.623 [2024-11-29 12:14:03.090909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.881 [2024-11-29 12:14:03.198980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.881 [2024-11-29 12:14:03.294606] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:57.881 [2024-11-29 12:14:03.294731] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:57.881 [2024-11-29 12:14:03.294770] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:58.140 [2024-11-29 12:14:03.425163] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:58.140 12:14:03 -- common/autotest_common.sh@653 -- # es=236 00:29:58.140 12:14:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:58.140 12:14:03 -- common/autotest_common.sh@662 -- # es=108 00:29:58.140 12:14:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:58.140 12:14:03 -- common/autotest_common.sh@670 -- # es=1 00:29:58.140 12:14:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:58.140 12:14:03 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:58.140 12:14:03 -- common/autotest_common.sh@650 -- # local es=0 00:29:58.140 12:14:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:58.140 12:14:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.140 12:14:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:58.140 12:14:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.140 12:14:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:58.140 12:14:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.140 12:14:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:58.140 12:14:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.140 12:14:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:58.140 12:14:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:58.140 [2024-11-29 12:14:03.608797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:58.140 [2024-11-29 12:14:03.609060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145432 ] 00:29:58.399 [2024-11-29 12:14:03.755422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.399 [2024-11-29 12:14:03.850860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.657 [2024-11-29 12:14:03.938916] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:58.657 [2024-11-29 12:14:03.939028] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:58.657 [2024-11-29 12:14:03.939069] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:58.657 [2024-11-29 12:14:04.066184] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:58.916 12:14:04 -- common/autotest_common.sh@653 -- # es=236 00:29:58.916 12:14:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:58.916 12:14:04 -- common/autotest_common.sh@662 -- # es=108 00:29:58.916 12:14:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:58.916 12:14:04 -- common/autotest_common.sh@670 -- # es=1 00:29:58.916 12:14:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:58.916 00:29:58.916 real 0m1.291s 00:29:58.916 user 0m0.695s 00:29:58.916 sys 0m0.395s 00:29:58.916 ************************************ 00:29:58.916 END TEST dd_flag_directory 00:29:58.916 ************************************ 00:29:58.916 12:14:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:58.916 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:29:58.916 12:14:04 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:29:58.916 12:14:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:58.916 12:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:58.916 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:29:58.916 ************************************ 00:29:58.916 START TEST dd_flag_nofollow 00:29:58.916 ************************************ 00:29:58.916 12:14:04 -- common/autotest_common.sh@1114 -- # nofollow 00:29:58.916 12:14:04 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:58.916 12:14:04 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:58.916 12:14:04 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:58.916 12:14:04 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:58.916 12:14:04 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:58.916 12:14:04 -- common/autotest_common.sh@650 -- # local es=0 00:29:58.916 12:14:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:58.916 12:14:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.916 12:14:04 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:58.916 12:14:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.916 12:14:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:58.916 12:14:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.916 12:14:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:58.916 12:14:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:58.916 12:14:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:58.916 12:14:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:58.916 [2024-11-29 12:14:04.307136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:58.916 [2024-11-29 12:14:04.307457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145470 ] 00:29:59.175 [2024-11-29 12:14:04.459221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.175 [2024-11-29 12:14:04.554883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.175 [2024-11-29 12:14:04.642784] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:29:59.175 [2024-11-29 12:14:04.642912] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:29:59.175 [2024-11-29 12:14:04.642955] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:59.433 [2024-11-29 12:14:04.769145] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:59.433 12:14:04 -- common/autotest_common.sh@653 -- # es=216 00:29:59.433 12:14:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:59.433 12:14:04 -- common/autotest_common.sh@662 -- # es=88 00:29:59.433 12:14:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:59.433 12:14:04 -- common/autotest_common.sh@670 -- # es=1 00:29:59.433 12:14:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:59.433 12:14:04 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:59.433 12:14:04 -- common/autotest_common.sh@650 -- # local es=0 00:29:59.433 12:14:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:59.433 12:14:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.433 12:14:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.433 12:14:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.433 12:14:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.433 12:14:04 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.433 12:14:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.433 12:14:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.433 12:14:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:59.433 12:14:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:59.433 [2024-11-29 12:14:04.939252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:59.433 [2024-11-29 12:14:04.939474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145475 ] 00:29:59.691 [2024-11-29 12:14:05.078263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.691 [2024-11-29 12:14:05.173770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.958 [2024-11-29 12:14:05.261853] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:29:59.958 [2024-11-29 12:14:05.261960] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:29:59.958 [2024-11-29 12:14:05.262008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:59.958 [2024-11-29 12:14:05.388388] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:00.215 12:14:05 -- common/autotest_common.sh@653 -- # es=216 00:30:00.216 12:14:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:00.216 12:14:05 -- common/autotest_common.sh@662 -- # es=88 00:30:00.216 12:14:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:00.216 12:14:05 -- common/autotest_common.sh@670 -- # es=1 00:30:00.216 12:14:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:00.216 12:14:05 -- dd/posix.sh@46 -- # gen_bytes 512 00:30:00.216 12:14:05 -- dd/common.sh@98 -- # xtrace_disable 00:30:00.216 12:14:05 -- common/autotest_common.sh@10 -- # set +x 00:30:00.216 12:14:05 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:00.216 [2024-11-29 12:14:05.578190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:00.216 [2024-11-29 12:14:05.578527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145494 ] 00:30:00.473 [2024-11-29 12:14:05.734277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.473 [2024-11-29 12:14:05.829195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.473  [2024-11-29T12:14:06.242Z] Copying: 512/512 [B] (average 500 kBps) 00:30:00.731 00:30:00.731 12:14:06 -- dd/posix.sh@49 -- # [[ a9ocqu64atw6fdedi407q9rmgdo04626jzsd61rnxssn9kf7kux6i0o6z53bsj1lsbdfao1s0kqpdr84i8omuljxz4hg374ns89oyazabxkc57glk17ai9dxluqc2n4szksygycj6ha4arz8elwepx5byuzz1nory5qmjf0orxw9zydws2485p6ju3ietnoep4iiyyesfdhbm3ti2o1rcxvknbxhxzo3td5l2qu0d8de8jkz0xvuh84bhhuglbj08ogbssib3b13npojlp8csrozwy820hv6mz7t8q1k3sgl3lwfc3n6r37r9l688otih0oesamx13a86fl41qzhx7kkmeo6g3r7mkudkcbgswyzdjgrm2vmd8djc0kdkt1upavtfts93y6om644jht4mnrtzzw0m67kmnpxupag53exjqf5olwd9fl59a0hw80d4ioo2ydd0u7y9849w4v6ds3irtcvggyn6xxkd8omb5zrxtelrhz372wn1612r1jq == \a\9\o\c\q\u\6\4\a\t\w\6\f\d\e\d\i\4\0\7\q\9\r\m\g\d\o\0\4\6\2\6\j\z\s\d\6\1\r\n\x\s\s\n\9\k\f\7\k\u\x\6\i\0\o\6\z\5\3\b\s\j\1\l\s\b\d\f\a\o\1\s\0\k\q\p\d\r\8\4\i\8\o\m\u\l\j\x\z\4\h\g\3\7\4\n\s\8\9\o\y\a\z\a\b\x\k\c\5\7\g\l\k\1\7\a\i\9\d\x\l\u\q\c\2\n\4\s\z\k\s\y\g\y\c\j\6\h\a\4\a\r\z\8\e\l\w\e\p\x\5\b\y\u\z\z\1\n\o\r\y\5\q\m\j\f\0\o\r\x\w\9\z\y\d\w\s\2\4\8\5\p\6\j\u\3\i\e\t\n\o\e\p\4\i\i\y\y\e\s\f\d\h\b\m\3\t\i\2\o\1\r\c\x\v\k\n\b\x\h\x\z\o\3\t\d\5\l\2\q\u\0\d\8\d\e\8\j\k\z\0\x\v\u\h\8\4\b\h\h\u\g\l\b\j\0\8\o\g\b\s\s\i\b\3\b\1\3\n\p\o\j\l\p\8\c\s\r\o\z\w\y\8\2\0\h\v\6\m\z\7\t\8\q\1\k\3\s\g\l\3\l\w\f\c\3\n\6\r\3\7\r\9\l\6\8\8\o\t\i\h\0\o\e\s\a\m\x\1\3\a\8\6\f\l\4\1\q\z\h\x\7\k\k\m\e\o\6\g\3\r\7\m\k\u\d\k\c\b\g\s\w\y\z\d\j\g\r\m\2\v\m\d\8\d\j\c\0\k\d\k\t\1\u\p\a\v\t\f\t\s\9\3\y\6\o\m\6\4\4\j\h\t\4\m\n\r\t\z\z\w\0\m\6\7\k\m\n\p\x\u\p\a\g\5\3\e\x\j\q\f\5\o\l\w\d\9\f\l\5\9\a\0\h\w\8\0\d\4\i\o\o\2\y\d\d\0\u\7\y\9\8\4\9\w\4\v\6\d\s\3\i\r\t\c\v\g\g\y\n\6\x\x\k\d\8\o\m\b\5\z\r\x\t\e\l\r\h\z\3\7\2\w\n\1\6\1\2\r\1\j\q ]] 00:30:00.731 00:30:00.731 real 0m1.972s 00:30:00.731 user 0m1.016s 00:30:00.731 sys 0m0.618s 00:30:00.731 ************************************ 00:30:00.731 END TEST dd_flag_nofollow 00:30:00.731 ************************************ 00:30:00.731 12:14:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:00.731 12:14:06 -- common/autotest_common.sh@10 -- # set +x 00:30:00.989 12:14:06 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:30:00.989 12:14:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:00.989 12:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:00.989 12:14:06 -- common/autotest_common.sh@10 -- # set +x 00:30:00.989 ************************************ 00:30:00.989 START TEST dd_flag_noatime 00:30:00.989 ************************************ 00:30:00.989 12:14:06 -- common/autotest_common.sh@1114 -- # noatime 00:30:00.989 12:14:06 -- dd/posix.sh@53 -- # local atime_if 00:30:00.989 12:14:06 -- dd/posix.sh@54 -- # local atime_of 00:30:00.989 12:14:06 -- dd/posix.sh@58 -- # gen_bytes 512 00:30:00.989 12:14:06 -- dd/common.sh@98 -- # xtrace_disable 00:30:00.989 12:14:06 -- common/autotest_common.sh@10 -- # set +x 00:30:00.989 12:14:06 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:00.989 12:14:06 -- dd/posix.sh@60 -- # atime_if=1732882445 
00:30:00.989 12:14:06 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:00.989 12:14:06 -- dd/posix.sh@61 -- # atime_of=1732882446 00:30:00.989 12:14:06 -- dd/posix.sh@66 -- # sleep 1 00:30:01.922 12:14:07 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:01.922 [2024-11-29 12:14:07.336949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:01.922 [2024-11-29 12:14:07.337245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145548 ] 00:30:02.179 [2024-11-29 12:14:07.494117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.179 [2024-11-29 12:14:07.598031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.179  [2024-11-29T12:14:08.256Z] Copying: 512/512 [B] (average 500 kBps) 00:30:02.745 00:30:02.745 12:14:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:02.745 12:14:07 -- dd/posix.sh@69 -- # (( atime_if == 1732882445 )) 00:30:02.745 12:14:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:02.745 12:14:07 -- dd/posix.sh@70 -- # (( atime_of == 1732882446 )) 00:30:02.745 12:14:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:02.745 [2024-11-29 12:14:08.053111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:02.745 [2024-11-29 12:14:08.053458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145559 ] 00:30:02.745 [2024-11-29 12:14:08.211619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.002 [2024-11-29 12:14:08.314392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.002  [2024-11-29T12:14:08.771Z] Copying: 512/512 [B] (average 500 kBps) 00:30:03.260 00:30:03.260 12:14:08 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:03.260 12:14:08 -- dd/posix.sh@73 -- # (( atime_if < 1732882448 )) 00:30:03.260 00:30:03.260 real 0m2.449s 00:30:03.260 user 0m0.780s 00:30:03.260 sys 0m0.395s 00:30:03.260 12:14:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:03.260 ************************************ 00:30:03.260 END TEST dd_flag_noatime 00:30:03.260 ************************************ 00:30:03.260 12:14:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.260 12:14:08 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:30:03.260 12:14:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:03.260 12:14:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:03.260 12:14:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.260 ************************************ 00:30:03.260 START TEST dd_flags_misc 00:30:03.260 ************************************ 00:30:03.260 12:14:08 -- common/autotest_common.sh@1114 -- # io 00:30:03.260 12:14:08 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:30:03.260 12:14:08 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:30:03.260 12:14:08 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:30:03.260 12:14:08 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:30:03.260 12:14:08 -- dd/posix.sh@86 -- # gen_bytes 512 00:30:03.260 12:14:08 -- dd/common.sh@98 -- # xtrace_disable 00:30:03.260 12:14:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.260 12:14:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:03.260 12:14:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:30:03.517 [2024-11-29 12:14:08.822303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:03.517 [2024-11-29 12:14:08.822640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145591 ] 00:30:03.517 [2024-11-29 12:14:08.972822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.775 [2024-11-29 12:14:09.075396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.775  [2024-11-29T12:14:09.543Z] Copying: 512/512 [B] (average 500 kBps) 00:30:04.032 00:30:04.032 12:14:09 -- dd/posix.sh@93 -- # [[ ase44evwlzs4dfzrudh9lm4zjae9kjy7hltshmqb2k8pp9xycexqycwj4rqcvws79yiynhezimiqle11mfdouzoey3m9ovomnsfsir6oueemhd8jkx8tw1vuig7rwo0h8blbh4ztc5mlr2sj3gtswxnufcycg0pe79jbinpssikor5ia31t3toe5nf6g7jb5ijsw2txssopu24wbppf1raoqacz6b73mhjvdswwlrlcocs6z2w2qx66vj46f7mookzhat1dzshwdvlrldn180412rz6t4cxyol427m0tlvkfed2f3znmmfcbofnu96ycpoy2rw322hod5mk0p12iuscna53dxiggxti2pn02zfajsbg6y6jjxdlbv06f3fk1giuz62emd7k5nxxcl1depbsa6s8gz39k9b6njfapna617ayxfittstpfkorakaci0nv8eo1hpchejvyj03dccpb5f58q3j0iknd3927olx0qgf93pv70a8e4kunwhi9l == \a\s\e\4\4\e\v\w\l\z\s\4\d\f\z\r\u\d\h\9\l\m\4\z\j\a\e\9\k\j\y\7\h\l\t\s\h\m\q\b\2\k\8\p\p\9\x\y\c\e\x\q\y\c\w\j\4\r\q\c\v\w\s\7\9\y\i\y\n\h\e\z\i\m\i\q\l\e\1\1\m\f\d\o\u\z\o\e\y\3\m\9\o\v\o\m\n\s\f\s\i\r\6\o\u\e\e\m\h\d\8\j\k\x\8\t\w\1\v\u\i\g\7\r\w\o\0\h\8\b\l\b\h\4\z\t\c\5\m\l\r\2\s\j\3\g\t\s\w\x\n\u\f\c\y\c\g\0\p\e\7\9\j\b\i\n\p\s\s\i\k\o\r\5\i\a\3\1\t\3\t\o\e\5\n\f\6\g\7\j\b\5\i\j\s\w\2\t\x\s\s\o\p\u\2\4\w\b\p\p\f\1\r\a\o\q\a\c\z\6\b\7\3\m\h\j\v\d\s\w\w\l\r\l\c\o\c\s\6\z\2\w\2\q\x\6\6\v\j\4\6\f\7\m\o\o\k\z\h\a\t\1\d\z\s\h\w\d\v\l\r\l\d\n\1\8\0\4\1\2\r\z\6\t\4\c\x\y\o\l\4\2\7\m\0\t\l\v\k\f\e\d\2\f\3\z\n\m\m\f\c\b\o\f\n\u\9\6\y\c\p\o\y\2\r\w\3\2\2\h\o\d\5\m\k\0\p\1\2\i\u\s\c\n\a\5\3\d\x\i\g\g\x\t\i\2\p\n\0\2\z\f\a\j\s\b\g\6\y\6\j\j\x\d\l\b\v\0\6\f\3\f\k\1\g\i\u\z\6\2\e\m\d\7\k\5\n\x\x\c\l\1\d\e\p\b\s\a\6\s\8\g\z\3\9\k\9\b\6\n\j\f\a\p\n\a\6\1\7\a\y\x\f\i\t\t\s\t\p\f\k\o\r\a\k\a\c\i\0\n\v\8\e\o\1\h\p\c\h\e\j\v\y\j\0\3\d\c\c\p\b\5\f\5\8\q\3\j\0\i\k\n\d\3\9\2\7\o\l\x\0\q\g\f\9\3\p\v\7\0\a\8\e\4\k\u\n\w\h\i\9\l ]] 00:30:04.032 12:14:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:04.032 12:14:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:30:04.032 [2024-11-29 12:14:09.529276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:04.032 [2024-11-29 12:14:09.529566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145612 ] 00:30:04.291 [2024-11-29 12:14:09.676464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.291 [2024-11-29 12:14:09.772293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.549  [2024-11-29T12:14:10.318Z] Copying: 512/512 [B] (average 500 kBps) 00:30:04.807 00:30:04.807 12:14:10 -- dd/posix.sh@93 -- # [[ ase44evwlzs4dfzrudh9lm4zjae9kjy7hltshmqb2k8pp9xycexqycwj4rqcvws79yiynhezimiqle11mfdouzoey3m9ovomnsfsir6oueemhd8jkx8tw1vuig7rwo0h8blbh4ztc5mlr2sj3gtswxnufcycg0pe79jbinpssikor5ia31t3toe5nf6g7jb5ijsw2txssopu24wbppf1raoqacz6b73mhjvdswwlrlcocs6z2w2qx66vj46f7mookzhat1dzshwdvlrldn180412rz6t4cxyol427m0tlvkfed2f3znmmfcbofnu96ycpoy2rw322hod5mk0p12iuscna53dxiggxti2pn02zfajsbg6y6jjxdlbv06f3fk1giuz62emd7k5nxxcl1depbsa6s8gz39k9b6njfapna617ayxfittstpfkorakaci0nv8eo1hpchejvyj03dccpb5f58q3j0iknd3927olx0qgf93pv70a8e4kunwhi9l == \a\s\e\4\4\e\v\w\l\z\s\4\d\f\z\r\u\d\h\9\l\m\4\z\j\a\e\9\k\j\y\7\h\l\t\s\h\m\q\b\2\k\8\p\p\9\x\y\c\e\x\q\y\c\w\j\4\r\q\c\v\w\s\7\9\y\i\y\n\h\e\z\i\m\i\q\l\e\1\1\m\f\d\o\u\z\o\e\y\3\m\9\o\v\o\m\n\s\f\s\i\r\6\o\u\e\e\m\h\d\8\j\k\x\8\t\w\1\v\u\i\g\7\r\w\o\0\h\8\b\l\b\h\4\z\t\c\5\m\l\r\2\s\j\3\g\t\s\w\x\n\u\f\c\y\c\g\0\p\e\7\9\j\b\i\n\p\s\s\i\k\o\r\5\i\a\3\1\t\3\t\o\e\5\n\f\6\g\7\j\b\5\i\j\s\w\2\t\x\s\s\o\p\u\2\4\w\b\p\p\f\1\r\a\o\q\a\c\z\6\b\7\3\m\h\j\v\d\s\w\w\l\r\l\c\o\c\s\6\z\2\w\2\q\x\6\6\v\j\4\6\f\7\m\o\o\k\z\h\a\t\1\d\z\s\h\w\d\v\l\r\l\d\n\1\8\0\4\1\2\r\z\6\t\4\c\x\y\o\l\4\2\7\m\0\t\l\v\k\f\e\d\2\f\3\z\n\m\m\f\c\b\o\f\n\u\9\6\y\c\p\o\y\2\r\w\3\2\2\h\o\d\5\m\k\0\p\1\2\i\u\s\c\n\a\5\3\d\x\i\g\g\x\t\i\2\p\n\0\2\z\f\a\j\s\b\g\6\y\6\j\j\x\d\l\b\v\0\6\f\3\f\k\1\g\i\u\z\6\2\e\m\d\7\k\5\n\x\x\c\l\1\d\e\p\b\s\a\6\s\8\g\z\3\9\k\9\b\6\n\j\f\a\p\n\a\6\1\7\a\y\x\f\i\t\t\s\t\p\f\k\o\r\a\k\a\c\i\0\n\v\8\e\o\1\h\p\c\h\e\j\v\y\j\0\3\d\c\c\p\b\5\f\5\8\q\3\j\0\i\k\n\d\3\9\2\7\o\l\x\0\q\g\f\9\3\p\v\7\0\a\8\e\4\k\u\n\w\h\i\9\l ]] 00:30:04.807 12:14:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:04.807 12:14:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:30:04.807 [2024-11-29 12:14:10.197153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:04.807 [2024-11-29 12:14:10.197375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145617 ] 00:30:05.065 [2024-11-29 12:14:10.339318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.065 [2024-11-29 12:14:10.436749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.065  [2024-11-29T12:14:10.834Z] Copying: 512/512 [B] (average 250 kBps) 00:30:05.323 00:30:05.581 12:14:10 -- dd/posix.sh@93 -- # [[ ase44evwlzs4dfzrudh9lm4zjae9kjy7hltshmqb2k8pp9xycexqycwj4rqcvws79yiynhezimiqle11mfdouzoey3m9ovomnsfsir6oueemhd8jkx8tw1vuig7rwo0h8blbh4ztc5mlr2sj3gtswxnufcycg0pe79jbinpssikor5ia31t3toe5nf6g7jb5ijsw2txssopu24wbppf1raoqacz6b73mhjvdswwlrlcocs6z2w2qx66vj46f7mookzhat1dzshwdvlrldn180412rz6t4cxyol427m0tlvkfed2f3znmmfcbofnu96ycpoy2rw322hod5mk0p12iuscna53dxiggxti2pn02zfajsbg6y6jjxdlbv06f3fk1giuz62emd7k5nxxcl1depbsa6s8gz39k9b6njfapna617ayxfittstpfkorakaci0nv8eo1hpchejvyj03dccpb5f58q3j0iknd3927olx0qgf93pv70a8e4kunwhi9l == \a\s\e\4\4\e\v\w\l\z\s\4\d\f\z\r\u\d\h\9\l\m\4\z\j\a\e\9\k\j\y\7\h\l\t\s\h\m\q\b\2\k\8\p\p\9\x\y\c\e\x\q\y\c\w\j\4\r\q\c\v\w\s\7\9\y\i\y\n\h\e\z\i\m\i\q\l\e\1\1\m\f\d\o\u\z\o\e\y\3\m\9\o\v\o\m\n\s\f\s\i\r\6\o\u\e\e\m\h\d\8\j\k\x\8\t\w\1\v\u\i\g\7\r\w\o\0\h\8\b\l\b\h\4\z\t\c\5\m\l\r\2\s\j\3\g\t\s\w\x\n\u\f\c\y\c\g\0\p\e\7\9\j\b\i\n\p\s\s\i\k\o\r\5\i\a\3\1\t\3\t\o\e\5\n\f\6\g\7\j\b\5\i\j\s\w\2\t\x\s\s\o\p\u\2\4\w\b\p\p\f\1\r\a\o\q\a\c\z\6\b\7\3\m\h\j\v\d\s\w\w\l\r\l\c\o\c\s\6\z\2\w\2\q\x\6\6\v\j\4\6\f\7\m\o\o\k\z\h\a\t\1\d\z\s\h\w\d\v\l\r\l\d\n\1\8\0\4\1\2\r\z\6\t\4\c\x\y\o\l\4\2\7\m\0\t\l\v\k\f\e\d\2\f\3\z\n\m\m\f\c\b\o\f\n\u\9\6\y\c\p\o\y\2\r\w\3\2\2\h\o\d\5\m\k\0\p\1\2\i\u\s\c\n\a\5\3\d\x\i\g\g\x\t\i\2\p\n\0\2\z\f\a\j\s\b\g\6\y\6\j\j\x\d\l\b\v\0\6\f\3\f\k\1\g\i\u\z\6\2\e\m\d\7\k\5\n\x\x\c\l\1\d\e\p\b\s\a\6\s\8\g\z\3\9\k\9\b\6\n\j\f\a\p\n\a\6\1\7\a\y\x\f\i\t\t\s\t\p\f\k\o\r\a\k\a\c\i\0\n\v\8\e\o\1\h\p\c\h\e\j\v\y\j\0\3\d\c\c\p\b\5\f\5\8\q\3\j\0\i\k\n\d\3\9\2\7\o\l\x\0\q\g\f\9\3\p\v\7\0\a\8\e\4\k\u\n\w\h\i\9\l ]] 00:30:05.581 12:14:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:05.581 12:14:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:30:05.581 [2024-11-29 12:14:10.895568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:05.581 [2024-11-29 12:14:10.896323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145634 ] 00:30:05.581 [2024-11-29 12:14:11.042109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.839 [2024-11-29 12:14:11.137401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.839  [2024-11-29T12:14:11.608Z] Copying: 512/512 [B] (average 250 kBps) 00:30:06.097 00:30:06.097 12:14:11 -- dd/posix.sh@93 -- # [[ ase44evwlzs4dfzrudh9lm4zjae9kjy7hltshmqb2k8pp9xycexqycwj4rqcvws79yiynhezimiqle11mfdouzoey3m9ovomnsfsir6oueemhd8jkx8tw1vuig7rwo0h8blbh4ztc5mlr2sj3gtswxnufcycg0pe79jbinpssikor5ia31t3toe5nf6g7jb5ijsw2txssopu24wbppf1raoqacz6b73mhjvdswwlrlcocs6z2w2qx66vj46f7mookzhat1dzshwdvlrldn180412rz6t4cxyol427m0tlvkfed2f3znmmfcbofnu96ycpoy2rw322hod5mk0p12iuscna53dxiggxti2pn02zfajsbg6y6jjxdlbv06f3fk1giuz62emd7k5nxxcl1depbsa6s8gz39k9b6njfapna617ayxfittstpfkorakaci0nv8eo1hpchejvyj03dccpb5f58q3j0iknd3927olx0qgf93pv70a8e4kunwhi9l == \a\s\e\4\4\e\v\w\l\z\s\4\d\f\z\r\u\d\h\9\l\m\4\z\j\a\e\9\k\j\y\7\h\l\t\s\h\m\q\b\2\k\8\p\p\9\x\y\c\e\x\q\y\c\w\j\4\r\q\c\v\w\s\7\9\y\i\y\n\h\e\z\i\m\i\q\l\e\1\1\m\f\d\o\u\z\o\e\y\3\m\9\o\v\o\m\n\s\f\s\i\r\6\o\u\e\e\m\h\d\8\j\k\x\8\t\w\1\v\u\i\g\7\r\w\o\0\h\8\b\l\b\h\4\z\t\c\5\m\l\r\2\s\j\3\g\t\s\w\x\n\u\f\c\y\c\g\0\p\e\7\9\j\b\i\n\p\s\s\i\k\o\r\5\i\a\3\1\t\3\t\o\e\5\n\f\6\g\7\j\b\5\i\j\s\w\2\t\x\s\s\o\p\u\2\4\w\b\p\p\f\1\r\a\o\q\a\c\z\6\b\7\3\m\h\j\v\d\s\w\w\l\r\l\c\o\c\s\6\z\2\w\2\q\x\6\6\v\j\4\6\f\7\m\o\o\k\z\h\a\t\1\d\z\s\h\w\d\v\l\r\l\d\n\1\8\0\4\1\2\r\z\6\t\4\c\x\y\o\l\4\2\7\m\0\t\l\v\k\f\e\d\2\f\3\z\n\m\m\f\c\b\o\f\n\u\9\6\y\c\p\o\y\2\r\w\3\2\2\h\o\d\5\m\k\0\p\1\2\i\u\s\c\n\a\5\3\d\x\i\g\g\x\t\i\2\p\n\0\2\z\f\a\j\s\b\g\6\y\6\j\j\x\d\l\b\v\0\6\f\3\f\k\1\g\i\u\z\6\2\e\m\d\7\k\5\n\x\x\c\l\1\d\e\p\b\s\a\6\s\8\g\z\3\9\k\9\b\6\n\j\f\a\p\n\a\6\1\7\a\y\x\f\i\t\t\s\t\p\f\k\o\r\a\k\a\c\i\0\n\v\8\e\o\1\h\p\c\h\e\j\v\y\j\0\3\d\c\c\p\b\5\f\5\8\q\3\j\0\i\k\n\d\3\9\2\7\o\l\x\0\q\g\f\9\3\p\v\7\0\a\8\e\4\k\u\n\w\h\i\9\l ]] 00:30:06.097 12:14:11 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:30:06.097 12:14:11 -- dd/posix.sh@86 -- # gen_bytes 512 00:30:06.097 12:14:11 -- dd/common.sh@98 -- # xtrace_disable 00:30:06.097 12:14:11 -- common/autotest_common.sh@10 -- # set +x 00:30:06.097 12:14:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:06.097 12:14:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:30:06.097 [2024-11-29 12:14:11.595428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:06.097 [2024-11-29 12:14:11.595629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145646 ] 00:30:06.354 [2024-11-29 12:14:11.736947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.354 [2024-11-29 12:14:11.832920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.613  [2024-11-29T12:14:12.382Z] Copying: 512/512 [B] (average 500 kBps) 00:30:06.871 00:30:06.871 12:14:12 -- dd/posix.sh@93 -- # [[ i5q71x2gism2so6ecrknbd0qa287xx76tx6rfomqzvjb31u4vgl6s2dznle66yco6fs6c79o07uvfuat3l91kx3w1rrq5jgqfg8obm2h3ckb90jl8x31iq63e3hpn0qer3hl53kwpds1gdgcnjaax3yxwd7xiyi3bfgj48sibrcrepekfml2u0zidqwvjmuhrdqe6xll59wy1pfh6dl0jivbzu6agpn6uzr4tfmnjzv3egk8h0r56hossc3txvi56h93k3uji6kyesnnp3bf47bas7ugfqo42agw85irm9skfeoa2c9cvcpnjicfk76lgebimr2ze7vravj9a4588ouyhw9vfj5ws9822zrvvdgcszy2ynp891bhjtnrkpjc32liyvx5vg563ayzqxlt1a25zvlu1puc5no2btz4ghkac3d34czsp7n42obu6tiezx3jcip00hfhxe5z9y0msklny42t3yw2rgkeb456hb68armuzg22be8t57l2ii3b == \i\5\q\7\1\x\2\g\i\s\m\2\s\o\6\e\c\r\k\n\b\d\0\q\a\2\8\7\x\x\7\6\t\x\6\r\f\o\m\q\z\v\j\b\3\1\u\4\v\g\l\6\s\2\d\z\n\l\e\6\6\y\c\o\6\f\s\6\c\7\9\o\0\7\u\v\f\u\a\t\3\l\9\1\k\x\3\w\1\r\r\q\5\j\g\q\f\g\8\o\b\m\2\h\3\c\k\b\9\0\j\l\8\x\3\1\i\q\6\3\e\3\h\p\n\0\q\e\r\3\h\l\5\3\k\w\p\d\s\1\g\d\g\c\n\j\a\a\x\3\y\x\w\d\7\x\i\y\i\3\b\f\g\j\4\8\s\i\b\r\c\r\e\p\e\k\f\m\l\2\u\0\z\i\d\q\w\v\j\m\u\h\r\d\q\e\6\x\l\l\5\9\w\y\1\p\f\h\6\d\l\0\j\i\v\b\z\u\6\a\g\p\n\6\u\z\r\4\t\f\m\n\j\z\v\3\e\g\k\8\h\0\r\5\6\h\o\s\s\c\3\t\x\v\i\5\6\h\9\3\k\3\u\j\i\6\k\y\e\s\n\n\p\3\b\f\4\7\b\a\s\7\u\g\f\q\o\4\2\a\g\w\8\5\i\r\m\9\s\k\f\e\o\a\2\c\9\c\v\c\p\n\j\i\c\f\k\7\6\l\g\e\b\i\m\r\2\z\e\7\v\r\a\v\j\9\a\4\5\8\8\o\u\y\h\w\9\v\f\j\5\w\s\9\8\2\2\z\r\v\v\d\g\c\s\z\y\2\y\n\p\8\9\1\b\h\j\t\n\r\k\p\j\c\3\2\l\i\y\v\x\5\v\g\5\6\3\a\y\z\q\x\l\t\1\a\2\5\z\v\l\u\1\p\u\c\5\n\o\2\b\t\z\4\g\h\k\a\c\3\d\3\4\c\z\s\p\7\n\4\2\o\b\u\6\t\i\e\z\x\3\j\c\i\p\0\0\h\f\h\x\e\5\z\9\y\0\m\s\k\l\n\y\4\2\t\3\y\w\2\r\g\k\e\b\4\5\6\h\b\6\8\a\r\m\u\z\g\2\2\b\e\8\t\5\7\l\2\i\i\3\b ]] 00:30:06.871 12:14:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:06.871 12:14:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:30:06.871 [2024-11-29 12:14:12.285929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:06.871 [2024-11-29 12:14:12.286157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145656 ] 00:30:07.178 [2024-11-29 12:14:12.431935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.178 [2024-11-29 12:14:12.527425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.178  [2024-11-29T12:14:12.947Z] Copying: 512/512 [B] (average 500 kBps) 00:30:07.436 00:30:07.436 12:14:12 -- dd/posix.sh@93 -- # [[ i5q71x2gism2so6ecrknbd0qa287xx76tx6rfomqzvjb31u4vgl6s2dznle66yco6fs6c79o07uvfuat3l91kx3w1rrq5jgqfg8obm2h3ckb90jl8x31iq63e3hpn0qer3hl53kwpds1gdgcnjaax3yxwd7xiyi3bfgj48sibrcrepekfml2u0zidqwvjmuhrdqe6xll59wy1pfh6dl0jivbzu6agpn6uzr4tfmnjzv3egk8h0r56hossc3txvi56h93k3uji6kyesnnp3bf47bas7ugfqo42agw85irm9skfeoa2c9cvcpnjicfk76lgebimr2ze7vravj9a4588ouyhw9vfj5ws9822zrvvdgcszy2ynp891bhjtnrkpjc32liyvx5vg563ayzqxlt1a25zvlu1puc5no2btz4ghkac3d34czsp7n42obu6tiezx3jcip00hfhxe5z9y0msklny42t3yw2rgkeb456hb68armuzg22be8t57l2ii3b == \i\5\q\7\1\x\2\g\i\s\m\2\s\o\6\e\c\r\k\n\b\d\0\q\a\2\8\7\x\x\7\6\t\x\6\r\f\o\m\q\z\v\j\b\3\1\u\4\v\g\l\6\s\2\d\z\n\l\e\6\6\y\c\o\6\f\s\6\c\7\9\o\0\7\u\v\f\u\a\t\3\l\9\1\k\x\3\w\1\r\r\q\5\j\g\q\f\g\8\o\b\m\2\h\3\c\k\b\9\0\j\l\8\x\3\1\i\q\6\3\e\3\h\p\n\0\q\e\r\3\h\l\5\3\k\w\p\d\s\1\g\d\g\c\n\j\a\a\x\3\y\x\w\d\7\x\i\y\i\3\b\f\g\j\4\8\s\i\b\r\c\r\e\p\e\k\f\m\l\2\u\0\z\i\d\q\w\v\j\m\u\h\r\d\q\e\6\x\l\l\5\9\w\y\1\p\f\h\6\d\l\0\j\i\v\b\z\u\6\a\g\p\n\6\u\z\r\4\t\f\m\n\j\z\v\3\e\g\k\8\h\0\r\5\6\h\o\s\s\c\3\t\x\v\i\5\6\h\9\3\k\3\u\j\i\6\k\y\e\s\n\n\p\3\b\f\4\7\b\a\s\7\u\g\f\q\o\4\2\a\g\w\8\5\i\r\m\9\s\k\f\e\o\a\2\c\9\c\v\c\p\n\j\i\c\f\k\7\6\l\g\e\b\i\m\r\2\z\e\7\v\r\a\v\j\9\a\4\5\8\8\o\u\y\h\w\9\v\f\j\5\w\s\9\8\2\2\z\r\v\v\d\g\c\s\z\y\2\y\n\p\8\9\1\b\h\j\t\n\r\k\p\j\c\3\2\l\i\y\v\x\5\v\g\5\6\3\a\y\z\q\x\l\t\1\a\2\5\z\v\l\u\1\p\u\c\5\n\o\2\b\t\z\4\g\h\k\a\c\3\d\3\4\c\z\s\p\7\n\4\2\o\b\u\6\t\i\e\z\x\3\j\c\i\p\0\0\h\f\h\x\e\5\z\9\y\0\m\s\k\l\n\y\4\2\t\3\y\w\2\r\g\k\e\b\4\5\6\h\b\6\8\a\r\m\u\z\g\2\2\b\e\8\t\5\7\l\2\i\i\3\b ]] 00:30:07.436 12:14:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:07.437 12:14:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:30:07.695 [2024-11-29 12:14:12.994593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:07.695 [2024-11-29 12:14:12.995452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145672 ] 00:30:07.695 [2024-11-29 12:14:13.138807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.954 [2024-11-29 12:14:13.246095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.954  [2024-11-29T12:14:13.723Z] Copying: 512/512 [B] (average 100 kBps) 00:30:08.212 00:30:08.212 12:14:13 -- dd/posix.sh@93 -- # [[ i5q71x2gism2so6ecrknbd0qa287xx76tx6rfomqzvjb31u4vgl6s2dznle66yco6fs6c79o07uvfuat3l91kx3w1rrq5jgqfg8obm2h3ckb90jl8x31iq63e3hpn0qer3hl53kwpds1gdgcnjaax3yxwd7xiyi3bfgj48sibrcrepekfml2u0zidqwvjmuhrdqe6xll59wy1pfh6dl0jivbzu6agpn6uzr4tfmnjzv3egk8h0r56hossc3txvi56h93k3uji6kyesnnp3bf47bas7ugfqo42agw85irm9skfeoa2c9cvcpnjicfk76lgebimr2ze7vravj9a4588ouyhw9vfj5ws9822zrvvdgcszy2ynp891bhjtnrkpjc32liyvx5vg563ayzqxlt1a25zvlu1puc5no2btz4ghkac3d34czsp7n42obu6tiezx3jcip00hfhxe5z9y0msklny42t3yw2rgkeb456hb68armuzg22be8t57l2ii3b == \i\5\q\7\1\x\2\g\i\s\m\2\s\o\6\e\c\r\k\n\b\d\0\q\a\2\8\7\x\x\7\6\t\x\6\r\f\o\m\q\z\v\j\b\3\1\u\4\v\g\l\6\s\2\d\z\n\l\e\6\6\y\c\o\6\f\s\6\c\7\9\o\0\7\u\v\f\u\a\t\3\l\9\1\k\x\3\w\1\r\r\q\5\j\g\q\f\g\8\o\b\m\2\h\3\c\k\b\9\0\j\l\8\x\3\1\i\q\6\3\e\3\h\p\n\0\q\e\r\3\h\l\5\3\k\w\p\d\s\1\g\d\g\c\n\j\a\a\x\3\y\x\w\d\7\x\i\y\i\3\b\f\g\j\4\8\s\i\b\r\c\r\e\p\e\k\f\m\l\2\u\0\z\i\d\q\w\v\j\m\u\h\r\d\q\e\6\x\l\l\5\9\w\y\1\p\f\h\6\d\l\0\j\i\v\b\z\u\6\a\g\p\n\6\u\z\r\4\t\f\m\n\j\z\v\3\e\g\k\8\h\0\r\5\6\h\o\s\s\c\3\t\x\v\i\5\6\h\9\3\k\3\u\j\i\6\k\y\e\s\n\n\p\3\b\f\4\7\b\a\s\7\u\g\f\q\o\4\2\a\g\w\8\5\i\r\m\9\s\k\f\e\o\a\2\c\9\c\v\c\p\n\j\i\c\f\k\7\6\l\g\e\b\i\m\r\2\z\e\7\v\r\a\v\j\9\a\4\5\8\8\o\u\y\h\w\9\v\f\j\5\w\s\9\8\2\2\z\r\v\v\d\g\c\s\z\y\2\y\n\p\8\9\1\b\h\j\t\n\r\k\p\j\c\3\2\l\i\y\v\x\5\v\g\5\6\3\a\y\z\q\x\l\t\1\a\2\5\z\v\l\u\1\p\u\c\5\n\o\2\b\t\z\4\g\h\k\a\c\3\d\3\4\c\z\s\p\7\n\4\2\o\b\u\6\t\i\e\z\x\3\j\c\i\p\0\0\h\f\h\x\e\5\z\9\y\0\m\s\k\l\n\y\4\2\t\3\y\w\2\r\g\k\e\b\4\5\6\h\b\6\8\a\r\m\u\z\g\2\2\b\e\8\t\5\7\l\2\i\i\3\b ]] 00:30:08.212 12:14:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:08.212 12:14:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:30:08.212 [2024-11-29 12:14:13.700111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:08.212 [2024-11-29 12:14:13.700594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145685 ] 00:30:08.469 [2024-11-29 12:14:13.841851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.469 [2024-11-29 12:14:13.937620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.726  [2024-11-29T12:14:14.496Z] Copying: 512/512 [B] (average 250 kBps) 00:30:08.985 00:30:08.985 ************************************ 00:30:08.985 END TEST dd_flags_misc 00:30:08.985 ************************************ 00:30:08.985 12:14:14 -- dd/posix.sh@93 -- # [[ i5q71x2gism2so6ecrknbd0qa287xx76tx6rfomqzvjb31u4vgl6s2dznle66yco6fs6c79o07uvfuat3l91kx3w1rrq5jgqfg8obm2h3ckb90jl8x31iq63e3hpn0qer3hl53kwpds1gdgcnjaax3yxwd7xiyi3bfgj48sibrcrepekfml2u0zidqwvjmuhrdqe6xll59wy1pfh6dl0jivbzu6agpn6uzr4tfmnjzv3egk8h0r56hossc3txvi56h93k3uji6kyesnnp3bf47bas7ugfqo42agw85irm9skfeoa2c9cvcpnjicfk76lgebimr2ze7vravj9a4588ouyhw9vfj5ws9822zrvvdgcszy2ynp891bhjtnrkpjc32liyvx5vg563ayzqxlt1a25zvlu1puc5no2btz4ghkac3d34czsp7n42obu6tiezx3jcip00hfhxe5z9y0msklny42t3yw2rgkeb456hb68armuzg22be8t57l2ii3b == \i\5\q\7\1\x\2\g\i\s\m\2\s\o\6\e\c\r\k\n\b\d\0\q\a\2\8\7\x\x\7\6\t\x\6\r\f\o\m\q\z\v\j\b\3\1\u\4\v\g\l\6\s\2\d\z\n\l\e\6\6\y\c\o\6\f\s\6\c\7\9\o\0\7\u\v\f\u\a\t\3\l\9\1\k\x\3\w\1\r\r\q\5\j\g\q\f\g\8\o\b\m\2\h\3\c\k\b\9\0\j\l\8\x\3\1\i\q\6\3\e\3\h\p\n\0\q\e\r\3\h\l\5\3\k\w\p\d\s\1\g\d\g\c\n\j\a\a\x\3\y\x\w\d\7\x\i\y\i\3\b\f\g\j\4\8\s\i\b\r\c\r\e\p\e\k\f\m\l\2\u\0\z\i\d\q\w\v\j\m\u\h\r\d\q\e\6\x\l\l\5\9\w\y\1\p\f\h\6\d\l\0\j\i\v\b\z\u\6\a\g\p\n\6\u\z\r\4\t\f\m\n\j\z\v\3\e\g\k\8\h\0\r\5\6\h\o\s\s\c\3\t\x\v\i\5\6\h\9\3\k\3\u\j\i\6\k\y\e\s\n\n\p\3\b\f\4\7\b\a\s\7\u\g\f\q\o\4\2\a\g\w\8\5\i\r\m\9\s\k\f\e\o\a\2\c\9\c\v\c\p\n\j\i\c\f\k\7\6\l\g\e\b\i\m\r\2\z\e\7\v\r\a\v\j\9\a\4\5\8\8\o\u\y\h\w\9\v\f\j\5\w\s\9\8\2\2\z\r\v\v\d\g\c\s\z\y\2\y\n\p\8\9\1\b\h\j\t\n\r\k\p\j\c\3\2\l\i\y\v\x\5\v\g\5\6\3\a\y\z\q\x\l\t\1\a\2\5\z\v\l\u\1\p\u\c\5\n\o\2\b\t\z\4\g\h\k\a\c\3\d\3\4\c\z\s\p\7\n\4\2\o\b\u\6\t\i\e\z\x\3\j\c\i\p\0\0\h\f\h\x\e\5\z\9\y\0\m\s\k\l\n\y\4\2\t\3\y\w\2\r\g\k\e\b\4\5\6\h\b\6\8\a\r\m\u\z\g\2\2\b\e\8\t\5\7\l\2\i\i\3\b ]] 00:30:08.985 00:30:08.985 real 0m5.586s 00:30:08.985 user 0m2.983s 00:30:08.985 sys 0m1.479s 00:30:08.985 12:14:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:08.985 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:30:08.985 12:14:14 -- dd/posix.sh@131 -- # tests_forced_aio 00:30:08.985 12:14:14 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:30:08.985 * Second test run, using AIO 00:30:08.985 12:14:14 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:30:08.985 12:14:14 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:30:08.985 12:14:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:08.985 12:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:08.985 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:30:08.985 ************************************ 00:30:08.985 START TEST dd_flag_append_forced_aio 00:30:08.985 ************************************ 00:30:08.985 12:14:14 -- common/autotest_common.sh@1114 -- # append 00:30:08.985 12:14:14 -- dd/posix.sh@16 -- # local dump0 00:30:08.985 12:14:14 -- dd/posix.sh@17 -- # local dump1 00:30:08.985 12:14:14 -- dd/posix.sh@19 -- # gen_bytes 32 00:30:08.985 12:14:14 -- dd/common.sh@98 
-- # xtrace_disable 00:30:08.985 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:30:08.985 12:14:14 -- dd/posix.sh@19 -- # dump0=i0kck33b4j59nhgz8rslm2m0bmeo3sq0 00:30:08.985 12:14:14 -- dd/posix.sh@20 -- # gen_bytes 32 00:30:08.985 12:14:14 -- dd/common.sh@98 -- # xtrace_disable 00:30:08.985 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:30:08.985 12:14:14 -- dd/posix.sh@20 -- # dump1=cnpzv5flt8dcukrhug6ynydkzbjto20i 00:30:08.985 12:14:14 -- dd/posix.sh@22 -- # printf %s i0kck33b4j59nhgz8rslm2m0bmeo3sq0 00:30:08.985 12:14:14 -- dd/posix.sh@23 -- # printf %s cnpzv5flt8dcukrhug6ynydkzbjto20i 00:30:08.985 12:14:14 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:30:08.985 [2024-11-29 12:14:14.462891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:08.985 [2024-11-29 12:14:14.463411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145724 ] 00:30:09.243 [2024-11-29 12:14:14.610045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.243 [2024-11-29 12:14:14.710903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.500  [2024-11-29T12:14:15.269Z] Copying: 32/32 [B] (average 31 kBps) 00:30:09.758 00:30:09.758 ************************************ 00:30:09.758 END TEST dd_flag_append_forced_aio 00:30:09.758 ************************************ 00:30:09.758 12:14:15 -- dd/posix.sh@27 -- # [[ cnpzv5flt8dcukrhug6ynydkzbjto20ii0kck33b4j59nhgz8rslm2m0bmeo3sq0 == \c\n\p\z\v\5\f\l\t\8\d\c\u\k\r\h\u\g\6\y\n\y\d\k\z\b\j\t\o\2\0\i\i\0\k\c\k\3\3\b\4\j\5\9\n\h\g\z\8\r\s\l\m\2\m\0\b\m\e\o\3\s\q\0 ]] 00:30:09.758 00:30:09.758 real 0m0.694s 00:30:09.758 user 0m0.375s 00:30:09.758 sys 0m0.180s 00:30:09.758 12:14:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:09.758 12:14:15 -- common/autotest_common.sh@10 -- # set +x 00:30:09.758 12:14:15 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:30:09.758 12:14:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:09.758 12:14:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:09.758 12:14:15 -- common/autotest_common.sh@10 -- # set +x 00:30:09.758 ************************************ 00:30:09.758 START TEST dd_flag_directory_forced_aio 00:30:09.758 ************************************ 00:30:09.758 12:14:15 -- common/autotest_common.sh@1114 -- # directory 00:30:09.758 12:14:15 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:09.758 12:14:15 -- common/autotest_common.sh@650 -- # local es=0 00:30:09.758 12:14:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:09.758 12:14:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:09.758 12:14:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:09.758 12:14:15 -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:09.758 12:14:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:09.758 12:14:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:09.758 12:14:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:09.758 12:14:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:09.758 12:14:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:09.758 12:14:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:09.758 [2024-11-29 12:14:15.214997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:09.758 [2024-11-29 12:14:15.215514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145750 ] 00:30:10.016 [2024-11-29 12:14:15.363596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.016 [2024-11-29 12:14:15.459904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.274 [2024-11-29 12:14:15.550418] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:10.274 [2024-11-29 12:14:15.550913] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:10.274 [2024-11-29 12:14:15.551136] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:10.274 [2024-11-29 12:14:15.690106] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:10.533 12:14:15 -- common/autotest_common.sh@653 -- # es=236 00:30:10.533 12:14:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:10.533 12:14:15 -- common/autotest_common.sh@662 -- # es=108 00:30:10.533 12:14:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:10.533 12:14:15 -- common/autotest_common.sh@670 -- # es=1 00:30:10.533 12:14:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:10.533 12:14:15 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:10.533 12:14:15 -- common/autotest_common.sh@650 -- # local es=0 00:30:10.533 12:14:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:10.533 12:14:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:10.533 12:14:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.533 12:14:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:10.533 12:14:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.533 12:14:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:10.533 12:14:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.533 12:14:15 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:10.533 12:14:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:10.533 12:14:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:10.533 [2024-11-29 12:14:15.877799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:10.533 [2024-11-29 12:14:15.878485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145770 ] 00:30:10.533 [2024-11-29 12:14:16.034184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.791 [2024-11-29 12:14:16.130826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.791 [2024-11-29 12:14:16.218823] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:10.791 [2024-11-29 12:14:16.219188] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:10.791 [2024-11-29 12:14:16.219271] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:11.049 [2024-11-29 12:14:16.345315] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:11.049 12:14:16 -- common/autotest_common.sh@653 -- # es=236 00:30:11.049 12:14:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:11.049 12:14:16 -- common/autotest_common.sh@662 -- # es=108 00:30:11.049 12:14:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:11.049 12:14:16 -- common/autotest_common.sh@670 -- # es=1 00:30:11.049 12:14:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:11.049 00:30:11.049 real 0m1.319s 00:30:11.049 user 0m0.722s 00:30:11.049 sys 0m0.391s 00:30:11.049 12:14:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:11.049 12:14:16 -- common/autotest_common.sh@10 -- # set +x 00:30:11.049 ************************************ 00:30:11.049 END TEST dd_flag_directory_forced_aio 00:30:11.049 ************************************ 00:30:11.049 12:14:16 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:30:11.049 12:14:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:11.049 12:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:11.049 12:14:16 -- common/autotest_common.sh@10 -- # set +x 00:30:11.049 ************************************ 00:30:11.049 START TEST dd_flag_nofollow_forced_aio 00:30:11.049 ************************************ 00:30:11.049 12:14:16 -- common/autotest_common.sh@1114 -- # nofollow 00:30:11.049 12:14:16 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:11.049 12:14:16 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:11.049 12:14:16 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:11.049 12:14:16 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:11.049 12:14:16 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:11.049 12:14:16 -- common/autotest_common.sh@650 -- # local es=0 00:30:11.049 12:14:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:11.049 12:14:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.049 12:14:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.049 12:14:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.049 12:14:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.049 12:14:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.049 12:14:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.049 12:14:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.049 12:14:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:11.049 12:14:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:11.308 [2024-11-29 12:14:16.582390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:11.308 [2024-11-29 12:14:16.582942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145806 ] 00:30:11.308 [2024-11-29 12:14:16.731011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.566 [2024-11-29 12:14:16.826879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.566 [2024-11-29 12:14:16.915239] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:30:11.566 [2024-11-29 12:14:16.915476] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:30:11.566 [2024-11-29 12:14:16.915559] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:11.566 [2024-11-29 12:14:17.040774] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:11.825 12:14:17 -- common/autotest_common.sh@653 -- # es=216 00:30:11.825 12:14:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:11.825 12:14:17 -- common/autotest_common.sh@662 -- # es=88 00:30:11.825 12:14:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:11.825 12:14:17 -- common/autotest_common.sh@670 -- # es=1 00:30:11.825 12:14:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:11.825 12:14:17 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:30:11.825 12:14:17 -- common/autotest_common.sh@650 -- # local es=0 00:30:11.825 12:14:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:30:11.825 12:14:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.825 12:14:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.825 12:14:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.825 12:14:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.825 12:14:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.825 12:14:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.825 12:14:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.825 12:14:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:11.825 12:14:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:30:11.825 [2024-11-29 12:14:17.222844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:11.825 [2024-11-29 12:14:17.223676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145818 ] 00:30:12.084 [2024-11-29 12:14:17.371164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.084 [2024-11-29 12:14:17.467757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.084 [2024-11-29 12:14:17.556009] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:30:12.084 [2024-11-29 12:14:17.556366] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:30:12.084 [2024-11-29 12:14:17.556449] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:12.342 [2024-11-29 12:14:17.682888] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:12.342 12:14:17 -- common/autotest_common.sh@653 -- # es=216 00:30:12.342 12:14:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:12.342 12:14:17 -- common/autotest_common.sh@662 -- # es=88 00:30:12.342 12:14:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:12.342 12:14:17 -- common/autotest_common.sh@670 -- # es=1 00:30:12.342 12:14:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:12.342 12:14:17 -- dd/posix.sh@46 -- # gen_bytes 512 00:30:12.342 12:14:17 -- dd/common.sh@98 -- # xtrace_disable 00:30:12.342 12:14:17 -- common/autotest_common.sh@10 -- # set +x 00:30:12.342 12:14:17 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:12.603 [2024-11-29 12:14:17.865603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:12.603 [2024-11-29 12:14:17.865863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145830 ] 00:30:12.603 [2024-11-29 12:14:18.014711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.603 [2024-11-29 12:14:18.110681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.886  [2024-11-29T12:14:18.658Z] Copying: 512/512 [B] (average 500 kBps) 00:30:13.147 00:30:13.147 ************************************ 00:30:13.147 END TEST dd_flag_nofollow_forced_aio 00:30:13.147 ************************************ 00:30:13.147 12:14:18 -- dd/posix.sh@49 -- # [[ vtx4elnc8aqk0vzicewcvrx30m659mcv7e6ocab69fbn6b9iknwjlbk1tu3zka8cxlm1d0gtnv2mmrooc0ref2gl2kodd36avgsoll6ibh1vjq4jf3icpeed3m1t10ben86e626hd1dzt5knxrxnjtqb3ong3n7j94zrkhqapofwqg3awpgvzubcc879o2ncm1t6rujb9hx3b3ng5mlkhhm7sl6ldr8e4fke9cb4652mc170jap60s17hvq149yj3751p349er9q03isl4pmw7ikhzihq1qdy0i1j3d39f5wa92hkmq4sltpnyr3bim24me6kcy3b2i8z8mp3ajjyt5i4om8zb8tdlv3fwh0gwjfxaqma86hdkuujg1doz6vz8z0vv9x5p3li0gsqwdl14qd9f8b64zvs50gfh6tzggqc6jn2ml06cgtsf6ifx8cvgjqwtkoddve2lv03kcusr2dz5lqaellmkh6y7m5t66uri6lzyd6mv6kn6hox5za == \v\t\x\4\e\l\n\c\8\a\q\k\0\v\z\i\c\e\w\c\v\r\x\3\0\m\6\5\9\m\c\v\7\e\6\o\c\a\b\6\9\f\b\n\6\b\9\i\k\n\w\j\l\b\k\1\t\u\3\z\k\a\8\c\x\l\m\1\d\0\g\t\n\v\2\m\m\r\o\o\c\0\r\e\f\2\g\l\2\k\o\d\d\3\6\a\v\g\s\o\l\l\6\i\b\h\1\v\j\q\4\j\f\3\i\c\p\e\e\d\3\m\1\t\1\0\b\e\n\8\6\e\6\2\6\h\d\1\d\z\t\5\k\n\x\r\x\n\j\t\q\b\3\o\n\g\3\n\7\j\9\4\z\r\k\h\q\a\p\o\f\w\q\g\3\a\w\p\g\v\z\u\b\c\c\8\7\9\o\2\n\c\m\1\t\6\r\u\j\b\9\h\x\3\b\3\n\g\5\m\l\k\h\h\m\7\s\l\6\l\d\r\8\e\4\f\k\e\9\c\b\4\6\5\2\m\c\1\7\0\j\a\p\6\0\s\1\7\h\v\q\1\4\9\y\j\3\7\5\1\p\3\4\9\e\r\9\q\0\3\i\s\l\4\p\m\w\7\i\k\h\z\i\h\q\1\q\d\y\0\i\1\j\3\d\3\9\f\5\w\a\9\2\h\k\m\q\4\s\l\t\p\n\y\r\3\b\i\m\2\4\m\e\6\k\c\y\3\b\2\i\8\z\8\m\p\3\a\j\j\y\t\5\i\4\o\m\8\z\b\8\t\d\l\v\3\f\w\h\0\g\w\j\f\x\a\q\m\a\8\6\h\d\k\u\u\j\g\1\d\o\z\6\v\z\8\z\0\v\v\9\x\5\p\3\l\i\0\g\s\q\w\d\l\1\4\q\d\9\f\8\b\6\4\z\v\s\5\0\g\f\h\6\t\z\g\g\q\c\6\j\n\2\m\l\0\6\c\g\t\s\f\6\i\f\x\8\c\v\g\j\q\w\t\k\o\d\d\v\e\2\l\v\0\3\k\c\u\s\r\2\d\z\5\l\q\a\e\l\l\m\k\h\6\y\7\m\5\t\6\6\u\r\i\6\l\z\y\d\6\m\v\6\k\n\6\h\o\x\5\z\a ]] 00:30:13.147 00:30:13.147 real 0m1.972s 00:30:13.147 user 0m1.082s 00:30:13.147 sys 0m0.550s 00:30:13.147 12:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:13.147 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:30:13.147 12:14:18 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:30:13.147 12:14:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:13.147 12:14:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.147 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:30:13.147 ************************************ 00:30:13.147 START TEST dd_flag_noatime_forced_aio 00:30:13.147 ************************************ 00:30:13.147 12:14:18 -- common/autotest_common.sh@1114 -- # noatime 00:30:13.147 12:14:18 -- dd/posix.sh@53 -- # local atime_if 00:30:13.147 12:14:18 -- dd/posix.sh@54 -- # local atime_of 00:30:13.147 12:14:18 -- dd/posix.sh@58 -- # gen_bytes 512 00:30:13.147 12:14:18 -- dd/common.sh@98 -- # xtrace_disable 00:30:13.147 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:30:13.147 12:14:18 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:13.147 12:14:18 -- dd/posix.sh@60 -- 
# atime_if=1732882458 00:30:13.148 12:14:18 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:13.148 12:14:18 -- dd/posix.sh@61 -- # atime_of=1732882458 00:30:13.148 12:14:18 -- dd/posix.sh@66 -- # sleep 1 00:30:14.084 12:14:19 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:14.343 [2024-11-29 12:14:19.611568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:14.343 [2024-11-29 12:14:19.611833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145886 ] 00:30:14.343 [2024-11-29 12:14:19.761781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.601 [2024-11-29 12:14:19.864424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.601  [2024-11-29T12:14:20.371Z] Copying: 512/512 [B] (average 500 kBps) 00:30:14.860 00:30:14.860 12:14:20 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:14.860 12:14:20 -- dd/posix.sh@69 -- # (( atime_if == 1732882458 )) 00:30:14.860 12:14:20 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:14.860 12:14:20 -- dd/posix.sh@70 -- # (( atime_of == 1732882458 )) 00:30:14.860 12:14:20 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:14.860 [2024-11-29 12:14:20.314514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:14.860 [2024-11-29 12:14:20.314785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145894 ] 00:30:15.118 [2024-11-29 12:14:20.463138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.118 [2024-11-29 12:14:20.559769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.377  [2024-11-29T12:14:21.146Z] Copying: 512/512 [B] (average 500 kBps) 00:30:15.635 00:30:15.635 12:14:20 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:15.635 12:14:20 -- dd/posix.sh@73 -- # (( atime_if < 1732882460 )) 00:30:15.635 00:30:15.635 real 0m2.403s 00:30:15.635 user 0m0.741s 00:30:15.635 sys 0m0.387s 00:30:15.635 ************************************ 00:30:15.635 END TEST dd_flag_noatime_forced_aio 00:30:15.635 ************************************ 00:30:15.635 12:14:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:15.635 12:14:20 -- common/autotest_common.sh@10 -- # set +x 00:30:15.635 12:14:20 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:30:15.635 12:14:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:15.635 12:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:15.635 12:14:20 -- common/autotest_common.sh@10 -- # set +x 00:30:15.635 ************************************ 00:30:15.635 START TEST dd_flags_misc_forced_aio 00:30:15.635 ************************************ 00:30:15.635 12:14:20 -- common/autotest_common.sh@1114 -- # io 00:30:15.635 12:14:20 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:30:15.635 12:14:20 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:30:15.635 12:14:20 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:30:15.635 12:14:20 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:30:15.635 12:14:20 -- dd/posix.sh@86 -- # gen_bytes 512 00:30:15.635 12:14:20 -- dd/common.sh@98 -- # xtrace_disable 00:30:15.635 12:14:20 -- common/autotest_common.sh@10 -- # set +x 00:30:15.635 12:14:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:15.635 12:14:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:30:15.635 [2024-11-29 12:14:21.053059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:15.635 [2024-11-29 12:14:21.053323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145930 ] 00:30:15.893 [2024-11-29 12:14:21.202518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.893 [2024-11-29 12:14:21.308229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.893  [2024-11-29T12:14:21.971Z] Copying: 512/512 [B] (average 500 kBps) 00:30:16.460 00:30:16.461 12:14:21 -- dd/posix.sh@93 -- # [[ rc0n7ml09klwgilcy7jup0rhc6u3mfbavghi6deq9v4s04divpbxpyslt5ljcfo0lxw6zfjf11zq9jcmifgtvxxuypfvstzmwcwktgsc02mqgo6vh7usq6crxrnsi1jev8smon8s1tcb9a95x7rm4guzntxlybu95mo4lw1gq456rev4jtkqcl2pkk0gfnxhpt43y6gh705qrt7gertx18xbjyxt68mbi01e8rcmymylta4fv87u0rr8l2mz830oap88p1ft1ldtl1rj2qr1k9cby2wk4hshz00ppiiaeu2gnyatesw36sw1x1p6csf4z0pwwmt6el0k7ri7dlb272fx4ken4os3bhqo94pitlzn3rhwukslhzeem5ysoip1hkr37phux4monar2vhnmz1ycl6noivgrt2wugevhv7toeqwbwfrnlpax11drgdkxrq52n7sxg8yrgd19b9qh0uds3aetfs3whj4ltwak1ys4ubds80bmo75t262f9yad == \r\c\0\n\7\m\l\0\9\k\l\w\g\i\l\c\y\7\j\u\p\0\r\h\c\6\u\3\m\f\b\a\v\g\h\i\6\d\e\q\9\v\4\s\0\4\d\i\v\p\b\x\p\y\s\l\t\5\l\j\c\f\o\0\l\x\w\6\z\f\j\f\1\1\z\q\9\j\c\m\i\f\g\t\v\x\x\u\y\p\f\v\s\t\z\m\w\c\w\k\t\g\s\c\0\2\m\q\g\o\6\v\h\7\u\s\q\6\c\r\x\r\n\s\i\1\j\e\v\8\s\m\o\n\8\s\1\t\c\b\9\a\9\5\x\7\r\m\4\g\u\z\n\t\x\l\y\b\u\9\5\m\o\4\l\w\1\g\q\4\5\6\r\e\v\4\j\t\k\q\c\l\2\p\k\k\0\g\f\n\x\h\p\t\4\3\y\6\g\h\7\0\5\q\r\t\7\g\e\r\t\x\1\8\x\b\j\y\x\t\6\8\m\b\i\0\1\e\8\r\c\m\y\m\y\l\t\a\4\f\v\8\7\u\0\r\r\8\l\2\m\z\8\3\0\o\a\p\8\8\p\1\f\t\1\l\d\t\l\1\r\j\2\q\r\1\k\9\c\b\y\2\w\k\4\h\s\h\z\0\0\p\p\i\i\a\e\u\2\g\n\y\a\t\e\s\w\3\6\s\w\1\x\1\p\6\c\s\f\4\z\0\p\w\w\m\t\6\e\l\0\k\7\r\i\7\d\l\b\2\7\2\f\x\4\k\e\n\4\o\s\3\b\h\q\o\9\4\p\i\t\l\z\n\3\r\h\w\u\k\s\l\h\z\e\e\m\5\y\s\o\i\p\1\h\k\r\3\7\p\h\u\x\4\m\o\n\a\r\2\v\h\n\m\z\1\y\c\l\6\n\o\i\v\g\r\t\2\w\u\g\e\v\h\v\7\t\o\e\q\w\b\w\f\r\n\l\p\a\x\1\1\d\r\g\d\k\x\r\q\5\2\n\7\s\x\g\8\y\r\g\d\1\9\b\9\q\h\0\u\d\s\3\a\e\t\f\s\3\w\h\j\4\l\t\w\a\k\1\y\s\4\u\b\d\s\8\0\b\m\o\7\5\t\2\6\2\f\9\y\a\d ]] 00:30:16.461 12:14:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:16.461 12:14:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:30:16.461 [2024-11-29 12:14:21.750467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:16.461 [2024-11-29 12:14:21.750679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145944 ] 00:30:16.461 [2024-11-29 12:14:21.894152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.720 [2024-11-29 12:14:21.991072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.720  [2024-11-29T12:14:22.489Z] Copying: 512/512 [B] (average 500 kBps) 00:30:16.978 00:30:16.979 12:14:22 -- dd/posix.sh@93 -- # [[ rc0n7ml09klwgilcy7jup0rhc6u3mfbavghi6deq9v4s04divpbxpyslt5ljcfo0lxw6zfjf11zq9jcmifgtvxxuypfvstzmwcwktgsc02mqgo6vh7usq6crxrnsi1jev8smon8s1tcb9a95x7rm4guzntxlybu95mo4lw1gq456rev4jtkqcl2pkk0gfnxhpt43y6gh705qrt7gertx18xbjyxt68mbi01e8rcmymylta4fv87u0rr8l2mz830oap88p1ft1ldtl1rj2qr1k9cby2wk4hshz00ppiiaeu2gnyatesw36sw1x1p6csf4z0pwwmt6el0k7ri7dlb272fx4ken4os3bhqo94pitlzn3rhwukslhzeem5ysoip1hkr37phux4monar2vhnmz1ycl6noivgrt2wugevhv7toeqwbwfrnlpax11drgdkxrq52n7sxg8yrgd19b9qh0uds3aetfs3whj4ltwak1ys4ubds80bmo75t262f9yad == \r\c\0\n\7\m\l\0\9\k\l\w\g\i\l\c\y\7\j\u\p\0\r\h\c\6\u\3\m\f\b\a\v\g\h\i\6\d\e\q\9\v\4\s\0\4\d\i\v\p\b\x\p\y\s\l\t\5\l\j\c\f\o\0\l\x\w\6\z\f\j\f\1\1\z\q\9\j\c\m\i\f\g\t\v\x\x\u\y\p\f\v\s\t\z\m\w\c\w\k\t\g\s\c\0\2\m\q\g\o\6\v\h\7\u\s\q\6\c\r\x\r\n\s\i\1\j\e\v\8\s\m\o\n\8\s\1\t\c\b\9\a\9\5\x\7\r\m\4\g\u\z\n\t\x\l\y\b\u\9\5\m\o\4\l\w\1\g\q\4\5\6\r\e\v\4\j\t\k\q\c\l\2\p\k\k\0\g\f\n\x\h\p\t\4\3\y\6\g\h\7\0\5\q\r\t\7\g\e\r\t\x\1\8\x\b\j\y\x\t\6\8\m\b\i\0\1\e\8\r\c\m\y\m\y\l\t\a\4\f\v\8\7\u\0\r\r\8\l\2\m\z\8\3\0\o\a\p\8\8\p\1\f\t\1\l\d\t\l\1\r\j\2\q\r\1\k\9\c\b\y\2\w\k\4\h\s\h\z\0\0\p\p\i\i\a\e\u\2\g\n\y\a\t\e\s\w\3\6\s\w\1\x\1\p\6\c\s\f\4\z\0\p\w\w\m\t\6\e\l\0\k\7\r\i\7\d\l\b\2\7\2\f\x\4\k\e\n\4\o\s\3\b\h\q\o\9\4\p\i\t\l\z\n\3\r\h\w\u\k\s\l\h\z\e\e\m\5\y\s\o\i\p\1\h\k\r\3\7\p\h\u\x\4\m\o\n\a\r\2\v\h\n\m\z\1\y\c\l\6\n\o\i\v\g\r\t\2\w\u\g\e\v\h\v\7\t\o\e\q\w\b\w\f\r\n\l\p\a\x\1\1\d\r\g\d\k\x\r\q\5\2\n\7\s\x\g\8\y\r\g\d\1\9\b\9\q\h\0\u\d\s\3\a\e\t\f\s\3\w\h\j\4\l\t\w\a\k\1\y\s\4\u\b\d\s\8\0\b\m\o\7\5\t\2\6\2\f\9\y\a\d ]] 00:30:16.979 12:14:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:16.979 12:14:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:30:16.979 [2024-11-29 12:14:22.424376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:16.979 [2024-11-29 12:14:22.424701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145956 ] 00:30:17.237 [2024-11-29 12:14:22.579532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.237 [2024-11-29 12:14:22.678028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.495  [2024-11-29T12:14:23.265Z] Copying: 512/512 [B] (average 100 kBps) 00:30:17.754 00:30:17.754 12:14:23 -- dd/posix.sh@93 -- # [[ rc0n7ml09klwgilcy7jup0rhc6u3mfbavghi6deq9v4s04divpbxpyslt5ljcfo0lxw6zfjf11zq9jcmifgtvxxuypfvstzmwcwktgsc02mqgo6vh7usq6crxrnsi1jev8smon8s1tcb9a95x7rm4guzntxlybu95mo4lw1gq456rev4jtkqcl2pkk0gfnxhpt43y6gh705qrt7gertx18xbjyxt68mbi01e8rcmymylta4fv87u0rr8l2mz830oap88p1ft1ldtl1rj2qr1k9cby2wk4hshz00ppiiaeu2gnyatesw36sw1x1p6csf4z0pwwmt6el0k7ri7dlb272fx4ken4os3bhqo94pitlzn3rhwukslhzeem5ysoip1hkr37phux4monar2vhnmz1ycl6noivgrt2wugevhv7toeqwbwfrnlpax11drgdkxrq52n7sxg8yrgd19b9qh0uds3aetfs3whj4ltwak1ys4ubds80bmo75t262f9yad == \r\c\0\n\7\m\l\0\9\k\l\w\g\i\l\c\y\7\j\u\p\0\r\h\c\6\u\3\m\f\b\a\v\g\h\i\6\d\e\q\9\v\4\s\0\4\d\i\v\p\b\x\p\y\s\l\t\5\l\j\c\f\o\0\l\x\w\6\z\f\j\f\1\1\z\q\9\j\c\m\i\f\g\t\v\x\x\u\y\p\f\v\s\t\z\m\w\c\w\k\t\g\s\c\0\2\m\q\g\o\6\v\h\7\u\s\q\6\c\r\x\r\n\s\i\1\j\e\v\8\s\m\o\n\8\s\1\t\c\b\9\a\9\5\x\7\r\m\4\g\u\z\n\t\x\l\y\b\u\9\5\m\o\4\l\w\1\g\q\4\5\6\r\e\v\4\j\t\k\q\c\l\2\p\k\k\0\g\f\n\x\h\p\t\4\3\y\6\g\h\7\0\5\q\r\t\7\g\e\r\t\x\1\8\x\b\j\y\x\t\6\8\m\b\i\0\1\e\8\r\c\m\y\m\y\l\t\a\4\f\v\8\7\u\0\r\r\8\l\2\m\z\8\3\0\o\a\p\8\8\p\1\f\t\1\l\d\t\l\1\r\j\2\q\r\1\k\9\c\b\y\2\w\k\4\h\s\h\z\0\0\p\p\i\i\a\e\u\2\g\n\y\a\t\e\s\w\3\6\s\w\1\x\1\p\6\c\s\f\4\z\0\p\w\w\m\t\6\e\l\0\k\7\r\i\7\d\l\b\2\7\2\f\x\4\k\e\n\4\o\s\3\b\h\q\o\9\4\p\i\t\l\z\n\3\r\h\w\u\k\s\l\h\z\e\e\m\5\y\s\o\i\p\1\h\k\r\3\7\p\h\u\x\4\m\o\n\a\r\2\v\h\n\m\z\1\y\c\l\6\n\o\i\v\g\r\t\2\w\u\g\e\v\h\v\7\t\o\e\q\w\b\w\f\r\n\l\p\a\x\1\1\d\r\g\d\k\x\r\q\5\2\n\7\s\x\g\8\y\r\g\d\1\9\b\9\q\h\0\u\d\s\3\a\e\t\f\s\3\w\h\j\4\l\t\w\a\k\1\y\s\4\u\b\d\s\8\0\b\m\o\7\5\t\2\6\2\f\9\y\a\d ]] 00:30:17.754 12:14:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:17.754 12:14:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:30:17.754 [2024-11-29 12:14:23.114577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:17.754 [2024-11-29 12:14:23.114826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145968 ] 00:30:17.754 [2024-11-29 12:14:23.263142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.013 [2024-11-29 12:14:23.360355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.013  [2024-11-29T12:14:23.782Z] Copying: 512/512 [B] (average 250 kBps) 00:30:18.271 00:30:18.271 12:14:23 -- dd/posix.sh@93 -- # [[ rc0n7ml09klwgilcy7jup0rhc6u3mfbavghi6deq9v4s04divpbxpyslt5ljcfo0lxw6zfjf11zq9jcmifgtvxxuypfvstzmwcwktgsc02mqgo6vh7usq6crxrnsi1jev8smon8s1tcb9a95x7rm4guzntxlybu95mo4lw1gq456rev4jtkqcl2pkk0gfnxhpt43y6gh705qrt7gertx18xbjyxt68mbi01e8rcmymylta4fv87u0rr8l2mz830oap88p1ft1ldtl1rj2qr1k9cby2wk4hshz00ppiiaeu2gnyatesw36sw1x1p6csf4z0pwwmt6el0k7ri7dlb272fx4ken4os3bhqo94pitlzn3rhwukslhzeem5ysoip1hkr37phux4monar2vhnmz1ycl6noivgrt2wugevhv7toeqwbwfrnlpax11drgdkxrq52n7sxg8yrgd19b9qh0uds3aetfs3whj4ltwak1ys4ubds80bmo75t262f9yad == \r\c\0\n\7\m\l\0\9\k\l\w\g\i\l\c\y\7\j\u\p\0\r\h\c\6\u\3\m\f\b\a\v\g\h\i\6\d\e\q\9\v\4\s\0\4\d\i\v\p\b\x\p\y\s\l\t\5\l\j\c\f\o\0\l\x\w\6\z\f\j\f\1\1\z\q\9\j\c\m\i\f\g\t\v\x\x\u\y\p\f\v\s\t\z\m\w\c\w\k\t\g\s\c\0\2\m\q\g\o\6\v\h\7\u\s\q\6\c\r\x\r\n\s\i\1\j\e\v\8\s\m\o\n\8\s\1\t\c\b\9\a\9\5\x\7\r\m\4\g\u\z\n\t\x\l\y\b\u\9\5\m\o\4\l\w\1\g\q\4\5\6\r\e\v\4\j\t\k\q\c\l\2\p\k\k\0\g\f\n\x\h\p\t\4\3\y\6\g\h\7\0\5\q\r\t\7\g\e\r\t\x\1\8\x\b\j\y\x\t\6\8\m\b\i\0\1\e\8\r\c\m\y\m\y\l\t\a\4\f\v\8\7\u\0\r\r\8\l\2\m\z\8\3\0\o\a\p\8\8\p\1\f\t\1\l\d\t\l\1\r\j\2\q\r\1\k\9\c\b\y\2\w\k\4\h\s\h\z\0\0\p\p\i\i\a\e\u\2\g\n\y\a\t\e\s\w\3\6\s\w\1\x\1\p\6\c\s\f\4\z\0\p\w\w\m\t\6\e\l\0\k\7\r\i\7\d\l\b\2\7\2\f\x\4\k\e\n\4\o\s\3\b\h\q\o\9\4\p\i\t\l\z\n\3\r\h\w\u\k\s\l\h\z\e\e\m\5\y\s\o\i\p\1\h\k\r\3\7\p\h\u\x\4\m\o\n\a\r\2\v\h\n\m\z\1\y\c\l\6\n\o\i\v\g\r\t\2\w\u\g\e\v\h\v\7\t\o\e\q\w\b\w\f\r\n\l\p\a\x\1\1\d\r\g\d\k\x\r\q\5\2\n\7\s\x\g\8\y\r\g\d\1\9\b\9\q\h\0\u\d\s\3\a\e\t\f\s\3\w\h\j\4\l\t\w\a\k\1\y\s\4\u\b\d\s\8\0\b\m\o\7\5\t\2\6\2\f\9\y\a\d ]] 00:30:18.271 12:14:23 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:30:18.271 12:14:23 -- dd/posix.sh@86 -- # gen_bytes 512 00:30:18.271 12:14:23 -- dd/common.sh@98 -- # xtrace_disable 00:30:18.271 12:14:23 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 12:14:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:18.271 12:14:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:30:18.530 [2024-11-29 12:14:23.798439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:18.530 [2024-11-29 12:14:23.798653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145985 ] 00:30:18.530 [2024-11-29 12:14:23.946688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.788 [2024-11-29 12:14:24.045747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.788  [2024-11-29T12:14:24.598Z] Copying: 512/512 [B] (average 500 kBps) 00:30:19.087 00:30:19.087 12:14:24 -- dd/posix.sh@93 -- # [[ beb5i5ip08bsxtjus5pat6dy130y2gts5gxzgvgdtxvn52g2xrxbo5m1djln4sqy153c4m56xpf0dg27xkg55x00ozuupce4lobwcbuhsslh8cqgghlg5xxk1o8dc37a1czc59ae2oxh8cvjl4zljyz4fiu8zz46u4qy90d9ddhxsz2j6wshepn76ab08mdsj9wdc3j661gbcfd3s1k3in37cz7pd9ayerukcqw2lat7lrxvi8muugw94xt829lg0wtds91k9s4tn0or8b1x290xgsmxcahvah25ev6fdp62mqqeax1gysskuc8vjrt3rg1mveie92vyue8kkrnh6y6hpl5upz53xz0uc4xhxiy3of64vfwouysczjgl4zkpr1xaj224yw097nmwwieyzn34twze0x878elgbxumnt9wqv57biu13s7768trvrmtwrofgrcenm96ele7u5ql3hu5z3w0zzn27qm98up87803zc4yfx69ochi55dvhpb0 == \b\e\b\5\i\5\i\p\0\8\b\s\x\t\j\u\s\5\p\a\t\6\d\y\1\3\0\y\2\g\t\s\5\g\x\z\g\v\g\d\t\x\v\n\5\2\g\2\x\r\x\b\o\5\m\1\d\j\l\n\4\s\q\y\1\5\3\c\4\m\5\6\x\p\f\0\d\g\2\7\x\k\g\5\5\x\0\0\o\z\u\u\p\c\e\4\l\o\b\w\c\b\u\h\s\s\l\h\8\c\q\g\g\h\l\g\5\x\x\k\1\o\8\d\c\3\7\a\1\c\z\c\5\9\a\e\2\o\x\h\8\c\v\j\l\4\z\l\j\y\z\4\f\i\u\8\z\z\4\6\u\4\q\y\9\0\d\9\d\d\h\x\s\z\2\j\6\w\s\h\e\p\n\7\6\a\b\0\8\m\d\s\j\9\w\d\c\3\j\6\6\1\g\b\c\f\d\3\s\1\k\3\i\n\3\7\c\z\7\p\d\9\a\y\e\r\u\k\c\q\w\2\l\a\t\7\l\r\x\v\i\8\m\u\u\g\w\9\4\x\t\8\2\9\l\g\0\w\t\d\s\9\1\k\9\s\4\t\n\0\o\r\8\b\1\x\2\9\0\x\g\s\m\x\c\a\h\v\a\h\2\5\e\v\6\f\d\p\6\2\m\q\q\e\a\x\1\g\y\s\s\k\u\c\8\v\j\r\t\3\r\g\1\m\v\e\i\e\9\2\v\y\u\e\8\k\k\r\n\h\6\y\6\h\p\l\5\u\p\z\5\3\x\z\0\u\c\4\x\h\x\i\y\3\o\f\6\4\v\f\w\o\u\y\s\c\z\j\g\l\4\z\k\p\r\1\x\a\j\2\2\4\y\w\0\9\7\n\m\w\w\i\e\y\z\n\3\4\t\w\z\e\0\x\8\7\8\e\l\g\b\x\u\m\n\t\9\w\q\v\5\7\b\i\u\1\3\s\7\7\6\8\t\r\v\r\m\t\w\r\o\f\g\r\c\e\n\m\9\6\e\l\e\7\u\5\q\l\3\h\u\5\z\3\w\0\z\z\n\2\7\q\m\9\8\u\p\8\7\8\0\3\z\c\4\y\f\x\6\9\o\c\h\i\5\5\d\v\h\p\b\0 ]] 00:30:19.087 12:14:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:19.087 12:14:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:30:19.087 [2024-11-29 12:14:24.496959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:19.087 [2024-11-29 12:14:24.497273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145990 ] 00:30:19.346 [2024-11-29 12:14:24.645583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.346 [2024-11-29 12:14:24.744354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.346  [2024-11-29T12:14:25.422Z] Copying: 512/512 [B] (average 500 kBps) 00:30:19.911 00:30:19.911 12:14:25 -- dd/posix.sh@93 -- # [[ beb5i5ip08bsxtjus5pat6dy130y2gts5gxzgvgdtxvn52g2xrxbo5m1djln4sqy153c4m56xpf0dg27xkg55x00ozuupce4lobwcbuhsslh8cqgghlg5xxk1o8dc37a1czc59ae2oxh8cvjl4zljyz4fiu8zz46u4qy90d9ddhxsz2j6wshepn76ab08mdsj9wdc3j661gbcfd3s1k3in37cz7pd9ayerukcqw2lat7lrxvi8muugw94xt829lg0wtds91k9s4tn0or8b1x290xgsmxcahvah25ev6fdp62mqqeax1gysskuc8vjrt3rg1mveie92vyue8kkrnh6y6hpl5upz53xz0uc4xhxiy3of64vfwouysczjgl4zkpr1xaj224yw097nmwwieyzn34twze0x878elgbxumnt9wqv57biu13s7768trvrmtwrofgrcenm96ele7u5ql3hu5z3w0zzn27qm98up87803zc4yfx69ochi55dvhpb0 == \b\e\b\5\i\5\i\p\0\8\b\s\x\t\j\u\s\5\p\a\t\6\d\y\1\3\0\y\2\g\t\s\5\g\x\z\g\v\g\d\t\x\v\n\5\2\g\2\x\r\x\b\o\5\m\1\d\j\l\n\4\s\q\y\1\5\3\c\4\m\5\6\x\p\f\0\d\g\2\7\x\k\g\5\5\x\0\0\o\z\u\u\p\c\e\4\l\o\b\w\c\b\u\h\s\s\l\h\8\c\q\g\g\h\l\g\5\x\x\k\1\o\8\d\c\3\7\a\1\c\z\c\5\9\a\e\2\o\x\h\8\c\v\j\l\4\z\l\j\y\z\4\f\i\u\8\z\z\4\6\u\4\q\y\9\0\d\9\d\d\h\x\s\z\2\j\6\w\s\h\e\p\n\7\6\a\b\0\8\m\d\s\j\9\w\d\c\3\j\6\6\1\g\b\c\f\d\3\s\1\k\3\i\n\3\7\c\z\7\p\d\9\a\y\e\r\u\k\c\q\w\2\l\a\t\7\l\r\x\v\i\8\m\u\u\g\w\9\4\x\t\8\2\9\l\g\0\w\t\d\s\9\1\k\9\s\4\t\n\0\o\r\8\b\1\x\2\9\0\x\g\s\m\x\c\a\h\v\a\h\2\5\e\v\6\f\d\p\6\2\m\q\q\e\a\x\1\g\y\s\s\k\u\c\8\v\j\r\t\3\r\g\1\m\v\e\i\e\9\2\v\y\u\e\8\k\k\r\n\h\6\y\6\h\p\l\5\u\p\z\5\3\x\z\0\u\c\4\x\h\x\i\y\3\o\f\6\4\v\f\w\o\u\y\s\c\z\j\g\l\4\z\k\p\r\1\x\a\j\2\2\4\y\w\0\9\7\n\m\w\w\i\e\y\z\n\3\4\t\w\z\e\0\x\8\7\8\e\l\g\b\x\u\m\n\t\9\w\q\v\5\7\b\i\u\1\3\s\7\7\6\8\t\r\v\r\m\t\w\r\o\f\g\r\c\e\n\m\9\6\e\l\e\7\u\5\q\l\3\h\u\5\z\3\w\0\z\z\n\2\7\q\m\9\8\u\p\8\7\8\0\3\z\c\4\y\f\x\6\9\o\c\h\i\5\5\d\v\h\p\b\0 ]] 00:30:19.911 12:14:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:19.911 12:14:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:30:19.911 [2024-11-29 12:14:25.199937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:19.911 [2024-11-29 12:14:25.200191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146007 ] 00:30:19.911 [2024-11-29 12:14:25.347006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.169 [2024-11-29 12:14:25.445762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.169  [2024-11-29T12:14:25.938Z] Copying: 512/512 [B] (average 250 kBps) 00:30:20.427 00:30:20.427 12:14:25 -- dd/posix.sh@93 -- # [[ beb5i5ip08bsxtjus5pat6dy130y2gts5gxzgvgdtxvn52g2xrxbo5m1djln4sqy153c4m56xpf0dg27xkg55x00ozuupce4lobwcbuhsslh8cqgghlg5xxk1o8dc37a1czc59ae2oxh8cvjl4zljyz4fiu8zz46u4qy90d9ddhxsz2j6wshepn76ab08mdsj9wdc3j661gbcfd3s1k3in37cz7pd9ayerukcqw2lat7lrxvi8muugw94xt829lg0wtds91k9s4tn0or8b1x290xgsmxcahvah25ev6fdp62mqqeax1gysskuc8vjrt3rg1mveie92vyue8kkrnh6y6hpl5upz53xz0uc4xhxiy3of64vfwouysczjgl4zkpr1xaj224yw097nmwwieyzn34twze0x878elgbxumnt9wqv57biu13s7768trvrmtwrofgrcenm96ele7u5ql3hu5z3w0zzn27qm98up87803zc4yfx69ochi55dvhpb0 == \b\e\b\5\i\5\i\p\0\8\b\s\x\t\j\u\s\5\p\a\t\6\d\y\1\3\0\y\2\g\t\s\5\g\x\z\g\v\g\d\t\x\v\n\5\2\g\2\x\r\x\b\o\5\m\1\d\j\l\n\4\s\q\y\1\5\3\c\4\m\5\6\x\p\f\0\d\g\2\7\x\k\g\5\5\x\0\0\o\z\u\u\p\c\e\4\l\o\b\w\c\b\u\h\s\s\l\h\8\c\q\g\g\h\l\g\5\x\x\k\1\o\8\d\c\3\7\a\1\c\z\c\5\9\a\e\2\o\x\h\8\c\v\j\l\4\z\l\j\y\z\4\f\i\u\8\z\z\4\6\u\4\q\y\9\0\d\9\d\d\h\x\s\z\2\j\6\w\s\h\e\p\n\7\6\a\b\0\8\m\d\s\j\9\w\d\c\3\j\6\6\1\g\b\c\f\d\3\s\1\k\3\i\n\3\7\c\z\7\p\d\9\a\y\e\r\u\k\c\q\w\2\l\a\t\7\l\r\x\v\i\8\m\u\u\g\w\9\4\x\t\8\2\9\l\g\0\w\t\d\s\9\1\k\9\s\4\t\n\0\o\r\8\b\1\x\2\9\0\x\g\s\m\x\c\a\h\v\a\h\2\5\e\v\6\f\d\p\6\2\m\q\q\e\a\x\1\g\y\s\s\k\u\c\8\v\j\r\t\3\r\g\1\m\v\e\i\e\9\2\v\y\u\e\8\k\k\r\n\h\6\y\6\h\p\l\5\u\p\z\5\3\x\z\0\u\c\4\x\h\x\i\y\3\o\f\6\4\v\f\w\o\u\y\s\c\z\j\g\l\4\z\k\p\r\1\x\a\j\2\2\4\y\w\0\9\7\n\m\w\w\i\e\y\z\n\3\4\t\w\z\e\0\x\8\7\8\e\l\g\b\x\u\m\n\t\9\w\q\v\5\7\b\i\u\1\3\s\7\7\6\8\t\r\v\r\m\t\w\r\o\f\g\r\c\e\n\m\9\6\e\l\e\7\u\5\q\l\3\h\u\5\z\3\w\0\z\z\n\2\7\q\m\9\8\u\p\8\7\8\0\3\z\c\4\y\f\x\6\9\o\c\h\i\5\5\d\v\h\p\b\0 ]] 00:30:20.427 12:14:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:20.427 12:14:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:30:20.427 [2024-11-29 12:14:25.903960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:20.427 [2024-11-29 12:14:25.904226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146023 ] 00:30:20.686 [2024-11-29 12:14:26.053687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.686 [2024-11-29 12:14:26.155702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.944  [2024-11-29T12:14:26.713Z] Copying: 512/512 [B] (average 250 kBps) 00:30:21.202 00:30:21.202 12:14:26 -- dd/posix.sh@93 -- # [[ beb5i5ip08bsxtjus5pat6dy130y2gts5gxzgvgdtxvn52g2xrxbo5m1djln4sqy153c4m56xpf0dg27xkg55x00ozuupce4lobwcbuhsslh8cqgghlg5xxk1o8dc37a1czc59ae2oxh8cvjl4zljyz4fiu8zz46u4qy90d9ddhxsz2j6wshepn76ab08mdsj9wdc3j661gbcfd3s1k3in37cz7pd9ayerukcqw2lat7lrxvi8muugw94xt829lg0wtds91k9s4tn0or8b1x290xgsmxcahvah25ev6fdp62mqqeax1gysskuc8vjrt3rg1mveie92vyue8kkrnh6y6hpl5upz53xz0uc4xhxiy3of64vfwouysczjgl4zkpr1xaj224yw097nmwwieyzn34twze0x878elgbxumnt9wqv57biu13s7768trvrmtwrofgrcenm96ele7u5ql3hu5z3w0zzn27qm98up87803zc4yfx69ochi55dvhpb0 == \b\e\b\5\i\5\i\p\0\8\b\s\x\t\j\u\s\5\p\a\t\6\d\y\1\3\0\y\2\g\t\s\5\g\x\z\g\v\g\d\t\x\v\n\5\2\g\2\x\r\x\b\o\5\m\1\d\j\l\n\4\s\q\y\1\5\3\c\4\m\5\6\x\p\f\0\d\g\2\7\x\k\g\5\5\x\0\0\o\z\u\u\p\c\e\4\l\o\b\w\c\b\u\h\s\s\l\h\8\c\q\g\g\h\l\g\5\x\x\k\1\o\8\d\c\3\7\a\1\c\z\c\5\9\a\e\2\o\x\h\8\c\v\j\l\4\z\l\j\y\z\4\f\i\u\8\z\z\4\6\u\4\q\y\9\0\d\9\d\d\h\x\s\z\2\j\6\w\s\h\e\p\n\7\6\a\b\0\8\m\d\s\j\9\w\d\c\3\j\6\6\1\g\b\c\f\d\3\s\1\k\3\i\n\3\7\c\z\7\p\d\9\a\y\e\r\u\k\c\q\w\2\l\a\t\7\l\r\x\v\i\8\m\u\u\g\w\9\4\x\t\8\2\9\l\g\0\w\t\d\s\9\1\k\9\s\4\t\n\0\o\r\8\b\1\x\2\9\0\x\g\s\m\x\c\a\h\v\a\h\2\5\e\v\6\f\d\p\6\2\m\q\q\e\a\x\1\g\y\s\s\k\u\c\8\v\j\r\t\3\r\g\1\m\v\e\i\e\9\2\v\y\u\e\8\k\k\r\n\h\6\y\6\h\p\l\5\u\p\z\5\3\x\z\0\u\c\4\x\h\x\i\y\3\o\f\6\4\v\f\w\o\u\y\s\c\z\j\g\l\4\z\k\p\r\1\x\a\j\2\2\4\y\w\0\9\7\n\m\w\w\i\e\y\z\n\3\4\t\w\z\e\0\x\8\7\8\e\l\g\b\x\u\m\n\t\9\w\q\v\5\7\b\i\u\1\3\s\7\7\6\8\t\r\v\r\m\t\w\r\o\f\g\r\c\e\n\m\9\6\e\l\e\7\u\5\q\l\3\h\u\5\z\3\w\0\z\z\n\2\7\q\m\9\8\u\p\8\7\8\0\3\z\c\4\y\f\x\6\9\o\c\h\i\5\5\d\v\h\p\b\0 ]] 00:30:21.202 00:30:21.202 real 0m5.550s 00:30:21.202 user 0m2.955s 00:30:21.202 sys 0m1.509s 00:30:21.202 12:14:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:21.202 ************************************ 00:30:21.202 END TEST dd_flags_misc_forced_aio 00:30:21.202 ************************************ 00:30:21.202 12:14:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.202 12:14:26 -- dd/posix.sh@1 -- # cleanup 00:30:21.202 12:14:26 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:21.202 12:14:26 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:21.202 00:30:21.202 real 0m24.647s 00:30:21.202 user 0m12.125s 00:30:21.202 sys 0m6.366s 00:30:21.202 12:14:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:21.202 ************************************ 00:30:21.202 END TEST spdk_dd_posix 00:30:21.202 ************************************ 00:30:21.202 12:14:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.202 12:14:26 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:30:21.202 12:14:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:21.202 12:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:30:21.202 12:14:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.202 ************************************ 00:30:21.202 START TEST spdk_dd_malloc 00:30:21.202 ************************************ 00:30:21.202 12:14:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:30:21.202 * Looking for test storage... 00:30:21.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:21.202 12:14:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:21.202 12:14:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:21.202 12:14:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:21.461 12:14:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:21.461 12:14:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:21.461 12:14:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:21.461 12:14:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:21.461 12:14:26 -- scripts/common.sh@335 -- # IFS=.-: 00:30:21.461 12:14:26 -- scripts/common.sh@335 -- # read -ra ver1 00:30:21.461 12:14:26 -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.461 12:14:26 -- scripts/common.sh@336 -- # read -ra ver2 00:30:21.461 12:14:26 -- scripts/common.sh@337 -- # local 'op=<' 00:30:21.461 12:14:26 -- scripts/common.sh@339 -- # ver1_l=2 00:30:21.461 12:14:26 -- scripts/common.sh@340 -- # ver2_l=1 00:30:21.461 12:14:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:21.461 12:14:26 -- scripts/common.sh@343 -- # case "$op" in 00:30:21.461 12:14:26 -- scripts/common.sh@344 -- # : 1 00:30:21.461 12:14:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:21.461 12:14:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.461 12:14:26 -- scripts/common.sh@364 -- # decimal 1 00:30:21.461 12:14:26 -- scripts/common.sh@352 -- # local d=1 00:30:21.461 12:14:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.461 12:14:26 -- scripts/common.sh@354 -- # echo 1 00:30:21.461 12:14:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:21.461 12:14:26 -- scripts/common.sh@365 -- # decimal 2 00:30:21.461 12:14:26 -- scripts/common.sh@352 -- # local d=2 00:30:21.461 12:14:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.461 12:14:26 -- scripts/common.sh@354 -- # echo 2 00:30:21.461 12:14:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:21.461 12:14:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:21.461 12:14:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:21.461 12:14:26 -- scripts/common.sh@367 -- # return 0 00:30:21.461 12:14:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.461 12:14:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:21.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.461 --rc genhtml_branch_coverage=1 00:30:21.461 --rc genhtml_function_coverage=1 00:30:21.461 --rc genhtml_legend=1 00:30:21.461 --rc geninfo_all_blocks=1 00:30:21.461 --rc geninfo_unexecuted_blocks=1 00:30:21.461 00:30:21.461 ' 00:30:21.461 12:14:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:21.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.461 --rc genhtml_branch_coverage=1 00:30:21.461 --rc genhtml_function_coverage=1 00:30:21.461 --rc genhtml_legend=1 00:30:21.461 --rc geninfo_all_blocks=1 00:30:21.461 --rc geninfo_unexecuted_blocks=1 00:30:21.461 00:30:21.461 ' 00:30:21.461 12:14:26 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:30:21.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.461 --rc genhtml_branch_coverage=1 00:30:21.461 --rc genhtml_function_coverage=1 00:30:21.461 --rc genhtml_legend=1 00:30:21.461 --rc geninfo_all_blocks=1 00:30:21.461 --rc geninfo_unexecuted_blocks=1 00:30:21.461 00:30:21.461 ' 00:30:21.461 12:14:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:21.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.461 --rc genhtml_branch_coverage=1 00:30:21.461 --rc genhtml_function_coverage=1 00:30:21.461 --rc genhtml_legend=1 00:30:21.461 --rc geninfo_all_blocks=1 00:30:21.461 --rc geninfo_unexecuted_blocks=1 00:30:21.461 00:30:21.461 ' 00:30:21.461 12:14:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:21.461 12:14:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.461 12:14:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.461 12:14:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.462 12:14:26 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:21.462 12:14:26 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:21.462 12:14:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:21.462 12:14:26 -- paths/export.sh@5 -- # export PATH 00:30:21.462 12:14:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:21.462 12:14:26 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:30:21.462 12:14:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:21.462 
12:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:21.462 12:14:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.462 ************************************ 00:30:21.462 START TEST dd_malloc_copy 00:30:21.462 ************************************ 00:30:21.462 12:14:26 -- common/autotest_common.sh@1114 -- # malloc_copy 00:30:21.462 12:14:26 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:30:21.462 12:14:26 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:30:21.462 12:14:26 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:30:21.462 12:14:26 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:30:21.462 12:14:26 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:30:21.462 12:14:26 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:30:21.462 12:14:26 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:30:21.462 12:14:26 -- dd/malloc.sh@28 -- # gen_conf 00:30:21.462 12:14:26 -- dd/common.sh@31 -- # xtrace_disable 00:30:21.462 12:14:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.462 [2024-11-29 12:14:26.864131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:21.462 [2024-11-29 12:14:26.864458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146111 ] 00:30:21.462 { 00:30:21.462 "subsystems": [ 00:30:21.462 { 00:30:21.462 "subsystem": "bdev", 00:30:21.462 "config": [ 00:30:21.462 { 00:30:21.462 "params": { 00:30:21.462 "block_size": 512, 00:30:21.462 "num_blocks": 1048576, 00:30:21.462 "name": "malloc0" 00:30:21.462 }, 00:30:21.462 "method": "bdev_malloc_create" 00:30:21.462 }, 00:30:21.462 { 00:30:21.462 "params": { 00:30:21.462 "block_size": 512, 00:30:21.462 "num_blocks": 1048576, 00:30:21.462 "name": "malloc1" 00:30:21.462 }, 00:30:21.462 "method": "bdev_malloc_create" 00:30:21.462 }, 00:30:21.462 { 00:30:21.462 "method": "bdev_wait_for_examine" 00:30:21.462 } 00:30:21.462 ] 00:30:21.462 } 00:30:21.462 ] 00:30:21.462 } 00:30:21.721 [2024-11-29 12:14:27.011297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.721 [2024-11-29 12:14:27.108525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.098  [2024-11-29T12:14:29.545Z] Copying: 172/512 [MB] (172 MBps) [2024-11-29T12:14:30.480Z] Copying: 345/512 [MB] (172 MBps) [2024-11-29T12:14:31.447Z] Copying: 512/512 [MB] (average 172 MBps) 00:30:25.936 00:30:25.936 12:14:31 -- dd/malloc.sh@33 -- # gen_conf 00:30:25.936 12:14:31 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:30:25.936 12:14:31 -- dd/common.sh@31 -- # xtrace_disable 00:30:25.936 12:14:31 -- common/autotest_common.sh@10 -- # set +x 00:30:25.936 [2024-11-29 12:14:31.175131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:25.936 [2024-11-29 12:14:31.175387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146173 ] 00:30:25.936 { 00:30:25.936 "subsystems": [ 00:30:25.936 { 00:30:25.936 "subsystem": "bdev", 00:30:25.936 "config": [ 00:30:25.936 { 00:30:25.936 "params": { 00:30:25.936 "block_size": 512, 00:30:25.936 "num_blocks": 1048576, 00:30:25.936 "name": "malloc0" 00:30:25.936 }, 00:30:25.936 "method": "bdev_malloc_create" 00:30:25.936 }, 00:30:25.936 { 00:30:25.936 "params": { 00:30:25.936 "block_size": 512, 00:30:25.936 "num_blocks": 1048576, 00:30:25.936 "name": "malloc1" 00:30:25.936 }, 00:30:25.936 "method": "bdev_malloc_create" 00:30:25.936 }, 00:30:25.936 { 00:30:25.936 "method": "bdev_wait_for_examine" 00:30:25.936 } 00:30:25.936 ] 00:30:25.936 } 00:30:25.936 ] 00:30:25.936 } 00:30:25.936 [2024-11-29 12:14:31.322497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.936 [2024-11-29 12:14:31.418568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.318  [2024-11-29T12:14:34.203Z] Copying: 174/512 [MB] (174 MBps) [2024-11-29T12:14:34.769Z] Copying: 349/512 [MB] (174 MBps) [2024-11-29T12:14:35.702Z] Copying: 512/512 [MB] (average 174 MBps) 00:30:30.191 00:30:30.191 00:30:30.191 real 0m8.601s 00:30:30.191 user 0m7.458s 00:30:30.191 sys 0m0.992s 00:30:30.191 ************************************ 00:30:30.191 END TEST dd_malloc_copy 00:30:30.191 12:14:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:30.191 12:14:35 -- common/autotest_common.sh@10 -- # set +x 00:30:30.191 ************************************ 00:30:30.191 00:30:30.191 real 0m8.818s 00:30:30.191 user 0m7.638s 00:30:30.191 sys 0m1.039s 00:30:30.191 12:14:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:30.191 ************************************ 00:30:30.191 END TEST spdk_dd_malloc 00:30:30.191 ************************************ 00:30:30.191 12:14:35 -- common/autotest_common.sh@10 -- # set +x 00:30:30.191 12:14:35 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:30:30.191 12:14:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:30.191 12:14:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:30.191 12:14:35 -- common/autotest_common.sh@10 -- # set +x 00:30:30.191 ************************************ 00:30:30.191 START TEST spdk_dd_bdev_to_bdev 00:30:30.191 ************************************ 00:30:30.191 12:14:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:30:30.191 * Looking for test storage... 
00:30:30.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:30.191 12:14:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:30.191 12:14:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:30.191 12:14:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:30.191 12:14:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:30.191 12:14:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:30.191 12:14:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:30.191 12:14:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:30.191 12:14:35 -- scripts/common.sh@335 -- # IFS=.-: 00:30:30.191 12:14:35 -- scripts/common.sh@335 -- # read -ra ver1 00:30:30.191 12:14:35 -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.191 12:14:35 -- scripts/common.sh@336 -- # read -ra ver2 00:30:30.191 12:14:35 -- scripts/common.sh@337 -- # local 'op=<' 00:30:30.191 12:14:35 -- scripts/common.sh@339 -- # ver1_l=2 00:30:30.191 12:14:35 -- scripts/common.sh@340 -- # ver2_l=1 00:30:30.191 12:14:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:30.191 12:14:35 -- scripts/common.sh@343 -- # case "$op" in 00:30:30.191 12:14:35 -- scripts/common.sh@344 -- # : 1 00:30:30.191 12:14:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:30.191 12:14:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:30.191 12:14:35 -- scripts/common.sh@364 -- # decimal 1 00:30:30.191 12:14:35 -- scripts/common.sh@352 -- # local d=1 00:30:30.191 12:14:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.191 12:14:35 -- scripts/common.sh@354 -- # echo 1 00:30:30.191 12:14:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:30.191 12:14:35 -- scripts/common.sh@365 -- # decimal 2 00:30:30.191 12:14:35 -- scripts/common.sh@352 -- # local d=2 00:30:30.191 12:14:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.191 12:14:35 -- scripts/common.sh@354 -- # echo 2 00:30:30.191 12:14:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:30.191 12:14:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:30.191 12:14:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:30.191 12:14:35 -- scripts/common.sh@367 -- # return 0 00:30:30.191 12:14:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.191 12:14:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:30.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.191 --rc genhtml_branch_coverage=1 00:30:30.191 --rc genhtml_function_coverage=1 00:30:30.191 --rc genhtml_legend=1 00:30:30.191 --rc geninfo_all_blocks=1 00:30:30.191 --rc geninfo_unexecuted_blocks=1 00:30:30.191 00:30:30.191 ' 00:30:30.191 12:14:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:30.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.191 --rc genhtml_branch_coverage=1 00:30:30.191 --rc genhtml_function_coverage=1 00:30:30.191 --rc genhtml_legend=1 00:30:30.191 --rc geninfo_all_blocks=1 00:30:30.191 --rc geninfo_unexecuted_blocks=1 00:30:30.191 00:30:30.191 ' 00:30:30.191 12:14:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:30.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.191 --rc genhtml_branch_coverage=1 00:30:30.191 --rc genhtml_function_coverage=1 00:30:30.191 --rc genhtml_legend=1 00:30:30.192 --rc geninfo_all_blocks=1 00:30:30.192 --rc geninfo_unexecuted_blocks=1 00:30:30.192 00:30:30.192 ' 00:30:30.192 12:14:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:30.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.192 --rc genhtml_branch_coverage=1 00:30:30.192 --rc genhtml_function_coverage=1 00:30:30.192 --rc genhtml_legend=1 00:30:30.192 --rc geninfo_all_blocks=1 00:30:30.192 --rc geninfo_unexecuted_blocks=1 00:30:30.192 00:30:30.192 ' 00:30:30.192 12:14:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:30.192 12:14:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.192 12:14:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.192 12:14:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.192 12:14:35 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:30.192 12:14:35 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:30.192 12:14:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:30.192 12:14:35 -- paths/export.sh@5 -- # export PATH 00:30:30.192 12:14:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:30:30.192 12:14:35 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:30:30.451 [2024-11-29 12:14:35.723071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:30.451 [2024-11-29 12:14:35.723366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146291 ] 00:30:30.451 [2024-11-29 12:14:35.877178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.708 [2024-11-29 12:14:35.979911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.966  [2024-11-29T12:14:36.736Z] Copying: 256/256 [MB] (average 1163 MBps) 00:30:31.225 00:30:31.225 12:14:36 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:31.225 12:14:36 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:31.225 12:14:36 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:30:31.225 12:14:36 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:30:31.225 12:14:36 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:30:31.225 12:14:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:30:31.225 12:14:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:31.225 12:14:36 -- common/autotest_common.sh@10 -- # set +x 00:30:31.225 ************************************ 00:30:31.225 START TEST dd_inflate_file 00:30:31.225 ************************************ 00:30:31.225 12:14:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:30:31.225 [2024-11-29 12:14:36.647217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:31.225 [2024-11-29 12:14:36.648019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146314 ] 00:30:31.483 [2024-11-29 12:14:36.793582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.483 [2024-11-29 12:14:36.889926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.741  [2024-11-29T12:14:37.560Z] Copying: 64/64 [MB] (average 1163 MBps) 00:30:32.049 00:30:32.049 00:30:32.049 real 0m0.730s 00:30:32.049 user 0m0.352s 00:30:32.049 sys 0m0.245s 00:30:32.049 12:14:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:32.049 ************************************ 00:30:32.049 END TEST dd_inflate_file 00:30:32.049 ************************************ 00:30:32.049 12:14:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.049 12:14:37 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:30:32.049 12:14:37 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:30:32.049 12:14:37 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:30:32.049 12:14:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:30:32.049 12:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:32.049 12:14:37 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:30:32.049 12:14:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.049 12:14:37 -- dd/common.sh@31 -- # xtrace_disable 00:30:32.049 12:14:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.049 ************************************ 00:30:32.049 START TEST dd_copy_to_out_bdev 00:30:32.049 ************************************ 00:30:32.049 12:14:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:30:32.049 [2024-11-29 12:14:37.433011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:32.049 [2024-11-29 12:14:37.433323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146362 ] 00:30:32.049 { 00:30:32.049 "subsystems": [ 00:30:32.049 { 00:30:32.049 "subsystem": "bdev", 00:30:32.049 "config": [ 00:30:32.049 { 00:30:32.049 "params": { 00:30:32.049 "block_size": 4096, 00:30:32.049 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:32.049 "name": "aio1" 00:30:32.049 }, 00:30:32.049 "method": "bdev_aio_create" 00:30:32.049 }, 00:30:32.049 { 00:30:32.049 "params": { 00:30:32.049 "trtype": "pcie", 00:30:32.049 "traddr": "0000:00:06.0", 00:30:32.049 "name": "Nvme0" 00:30:32.049 }, 00:30:32.049 "method": "bdev_nvme_attach_controller" 00:30:32.049 }, 00:30:32.049 { 00:30:32.049 "method": "bdev_wait_for_examine" 00:30:32.049 } 00:30:32.049 ] 00:30:32.049 } 00:30:32.049 ] 00:30:32.049 } 00:30:32.307 [2024-11-29 12:14:37.587387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.307 [2024-11-29 12:14:37.683136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.682  [2024-11-29T12:14:39.451Z] Copying: 46/64 [MB] (46 MBps) [2024-11-29T12:14:39.709Z] Copying: 64/64 [MB] (average 46 MBps) 00:30:34.198 00:30:34.198 00:30:34.198 real 0m2.233s 00:30:34.198 user 0m1.843s 00:30:34.198 sys 0m0.282s 00:30:34.198 12:14:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:34.198 12:14:39 -- common/autotest_common.sh@10 -- # set +x 00:30:34.198 ************************************ 00:30:34.198 END TEST dd_copy_to_out_bdev 00:30:34.198 ************************************ 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:30:34.198 12:14:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:34.198 12:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:34.198 12:14:39 -- common/autotest_common.sh@10 -- # set +x 00:30:34.198 ************************************ 00:30:34.198 START TEST dd_offset_magic 00:30:34.198 ************************************ 00:30:34.198 12:14:39 -- common/autotest_common.sh@1114 -- # offset_magic 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:30:34.198 12:14:39 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:30:34.198 12:14:39 -- dd/common.sh@31 -- # xtrace_disable 00:30:34.198 12:14:39 -- common/autotest_common.sh@10 -- # set +x 00:30:34.454 { 00:30:34.454 "subsystems": [ 00:30:34.454 { 00:30:34.454 "subsystem": "bdev", 00:30:34.454 "config": [ 00:30:34.454 { 00:30:34.454 "params": { 00:30:34.454 "block_size": 4096, 00:30:34.454 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:34.454 "name": "aio1" 00:30:34.454 }, 00:30:34.454 "method": "bdev_aio_create" 00:30:34.454 }, 00:30:34.454 { 00:30:34.454 "params": { 00:30:34.454 "trtype": "pcie", 00:30:34.454 "traddr": "0000:00:06.0", 00:30:34.454 "name": "Nvme0" 00:30:34.454 }, 00:30:34.454 "method": 
"bdev_nvme_attach_controller" 00:30:34.454 }, 00:30:34.454 { 00:30:34.454 "method": "bdev_wait_for_examine" 00:30:34.454 } 00:30:34.454 ] 00:30:34.454 } 00:30:34.454 ] 00:30:34.454 } 00:30:34.454 [2024-11-29 12:14:39.721211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:34.455 [2024-11-29 12:14:39.721493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146409 ] 00:30:34.455 [2024-11-29 12:14:39.877991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.740 [2024-11-29 12:14:39.974194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.306  [2024-11-29T12:14:41.075Z] Copying: 65/65 [MB] (average 159 MBps) 00:30:35.564 00:30:35.564 12:14:40 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:30:35.564 12:14:40 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:30:35.564 12:14:40 -- dd/common.sh@31 -- # xtrace_disable 00:30:35.564 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:30:35.564 [2024-11-29 12:14:40.959416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:35.564 [2024-11-29 12:14:40.960274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146442 ] 00:30:35.564 { 00:30:35.564 "subsystems": [ 00:30:35.564 { 00:30:35.564 "subsystem": "bdev", 00:30:35.564 "config": [ 00:30:35.564 { 00:30:35.564 "params": { 00:30:35.564 "block_size": 4096, 00:30:35.564 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:35.564 "name": "aio1" 00:30:35.564 }, 00:30:35.564 "method": "bdev_aio_create" 00:30:35.564 }, 00:30:35.564 { 00:30:35.564 "params": { 00:30:35.564 "trtype": "pcie", 00:30:35.564 "traddr": "0000:00:06.0", 00:30:35.564 "name": "Nvme0" 00:30:35.564 }, 00:30:35.564 "method": "bdev_nvme_attach_controller" 00:30:35.564 }, 00:30:35.564 { 00:30:35.564 "method": "bdev_wait_for_examine" 00:30:35.564 } 00:30:35.564 ] 00:30:35.564 } 00:30:35.564 ] 00:30:35.564 } 00:30:35.821 [2024-11-29 12:14:41.109921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.821 [2024-11-29 12:14:41.213016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.080  [2024-11-29T12:14:41.849Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:30:36.338 00:30:36.338 12:14:41 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:30:36.338 12:14:41 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:30:36.338 12:14:41 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:30:36.338 12:14:41 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:30:36.338 12:14:41 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:30:36.338 12:14:41 -- dd/common.sh@31 -- # xtrace_disable 00:30:36.338 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:30:36.338 [2024-11-29 12:14:41.833366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:36.338 [2024-11-29 12:14:41.833592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146457 ] 00:30:36.338 { 00:30:36.338 "subsystems": [ 00:30:36.338 { 00:30:36.338 "subsystem": "bdev", 00:30:36.338 "config": [ 00:30:36.338 { 00:30:36.338 "params": { 00:30:36.338 "block_size": 4096, 00:30:36.338 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:36.338 "name": "aio1" 00:30:36.338 }, 00:30:36.338 "method": "bdev_aio_create" 00:30:36.338 }, 00:30:36.338 { 00:30:36.338 "params": { 00:30:36.338 "trtype": "pcie", 00:30:36.338 "traddr": "0000:00:06.0", 00:30:36.338 "name": "Nvme0" 00:30:36.338 }, 00:30:36.338 "method": "bdev_nvme_attach_controller" 00:30:36.338 }, 00:30:36.338 { 00:30:36.338 "method": "bdev_wait_for_examine" 00:30:36.338 } 00:30:36.338 ] 00:30:36.338 } 00:30:36.338 ] 00:30:36.338 } 00:30:36.597 [2024-11-29 12:14:41.984381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.597 [2024-11-29 12:14:42.086560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.162  [2024-11-29T12:14:42.931Z] Copying: 65/65 [MB] (average 255 MBps) 00:30:37.420 00:30:37.420 12:14:42 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:30:37.420 12:14:42 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:30:37.420 12:14:42 -- dd/common.sh@31 -- # xtrace_disable 00:30:37.420 12:14:42 -- common/autotest_common.sh@10 -- # set +x 00:30:37.420 [2024-11-29 12:14:42.930005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:37.420 [2024-11-29 12:14:42.930218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146486 ] 00:30:37.420 { 00:30:37.420 "subsystems": [ 00:30:37.420 { 00:30:37.420 "subsystem": "bdev", 00:30:37.420 "config": [ 00:30:37.420 { 00:30:37.420 "params": { 00:30:37.420 "block_size": 4096, 00:30:37.420 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:37.420 "name": "aio1" 00:30:37.420 }, 00:30:37.420 "method": "bdev_aio_create" 00:30:37.420 }, 00:30:37.420 { 00:30:37.420 "params": { 00:30:37.420 "trtype": "pcie", 00:30:37.420 "traddr": "0000:00:06.0", 00:30:37.420 "name": "Nvme0" 00:30:37.420 }, 00:30:37.420 "method": "bdev_nvme_attach_controller" 00:30:37.420 }, 00:30:37.420 { 00:30:37.420 "method": "bdev_wait_for_examine" 00:30:37.420 } 00:30:37.420 ] 00:30:37.420 } 00:30:37.421 ] 00:30:37.421 } 00:30:37.679 [2024-11-29 12:14:43.070198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.679 [2024-11-29 12:14:43.166978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.938  [2024-11-29T12:14:44.016Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:30:38.505 00:30:38.505 12:14:43 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:30:38.505 12:14:43 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:30:38.505 00:30:38.505 real 0m4.067s 00:30:38.505 user 0m2.314s 00:30:38.505 sys 0m0.928s 00:30:38.505 12:14:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:38.505 ************************************ 00:30:38.505 END TEST dd_offset_magic 00:30:38.505 ************************************ 00:30:38.505 12:14:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.505 12:14:43 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:30:38.505 12:14:43 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:30:38.505 12:14:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:38.505 12:14:43 -- dd/common.sh@11 -- # local nvme_ref= 00:30:38.505 12:14:43 -- dd/common.sh@12 -- # local size=4194330 00:30:38.505 12:14:43 -- dd/common.sh@14 -- # local bs=1048576 00:30:38.505 12:14:43 -- dd/common.sh@15 -- # local count=5 00:30:38.505 12:14:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:30:38.505 12:14:43 -- dd/common.sh@18 -- # gen_conf 00:30:38.505 12:14:43 -- dd/common.sh@31 -- # xtrace_disable 00:30:38.505 12:14:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.505 [2024-11-29 12:14:43.823970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:38.505 [2024-11-29 12:14:43.824168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146515 ] 00:30:38.505 { 00:30:38.505 "subsystems": [ 00:30:38.505 { 00:30:38.505 "subsystem": "bdev", 00:30:38.505 "config": [ 00:30:38.505 { 00:30:38.505 "params": { 00:30:38.506 "block_size": 4096, 00:30:38.506 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:38.506 "name": "aio1" 00:30:38.506 }, 00:30:38.506 "method": "bdev_aio_create" 00:30:38.506 }, 00:30:38.506 { 00:30:38.506 "params": { 00:30:38.506 "trtype": "pcie", 00:30:38.506 "traddr": "0000:00:06.0", 00:30:38.506 "name": "Nvme0" 00:30:38.506 }, 00:30:38.506 "method": "bdev_nvme_attach_controller" 00:30:38.506 }, 00:30:38.506 { 00:30:38.506 "method": "bdev_wait_for_examine" 00:30:38.506 } 00:30:38.506 ] 00:30:38.506 } 00:30:38.506 ] 00:30:38.506 } 00:30:38.506 [2024-11-29 12:14:43.964717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.764 [2024-11-29 12:14:44.061009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.764  [2024-11-29T12:14:44.842Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:30:39.331 00:30:39.331 12:14:44 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:30:39.331 12:14:44 -- dd/common.sh@10 -- # local bdev=aio1 00:30:39.331 12:14:44 -- dd/common.sh@11 -- # local nvme_ref= 00:30:39.331 12:14:44 -- dd/common.sh@12 -- # local size=4194330 00:30:39.331 12:14:44 -- dd/common.sh@14 -- # local bs=1048576 00:30:39.331 12:14:44 -- dd/common.sh@15 -- # local count=5 00:30:39.331 12:14:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:30:39.331 12:14:44 -- dd/common.sh@18 -- # gen_conf 00:30:39.331 12:14:44 -- dd/common.sh@31 -- # xtrace_disable 00:30:39.331 12:14:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.331 [2024-11-29 12:14:44.631230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:39.331 [2024-11-29 12:14:44.631470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146533 ] 00:30:39.331 { 00:30:39.331 "subsystems": [ 00:30:39.331 { 00:30:39.331 "subsystem": "bdev", 00:30:39.331 "config": [ 00:30:39.331 { 00:30:39.331 "params": { 00:30:39.331 "block_size": 4096, 00:30:39.331 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:39.331 "name": "aio1" 00:30:39.331 }, 00:30:39.331 "method": "bdev_aio_create" 00:30:39.331 }, 00:30:39.331 { 00:30:39.331 "params": { 00:30:39.331 "trtype": "pcie", 00:30:39.331 "traddr": "0000:00:06.0", 00:30:39.331 "name": "Nvme0" 00:30:39.331 }, 00:30:39.331 "method": "bdev_nvme_attach_controller" 00:30:39.331 }, 00:30:39.331 { 00:30:39.331 "method": "bdev_wait_for_examine" 00:30:39.332 } 00:30:39.332 ] 00:30:39.332 } 00:30:39.332 ] 00:30:39.332 } 00:30:39.332 [2024-11-29 12:14:44.778719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.589 [2024-11-29 12:14:44.876282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.589  [2024-11-29T12:14:45.665Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:30:40.154 00:30:40.154 12:14:45 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:30:40.154 00:30:40.154 real 0m9.981s 00:30:40.154 user 0m6.157s 00:30:40.154 sys 0m2.401s 00:30:40.154 12:14:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:40.154 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.154 ************************************ 00:30:40.154 END TEST spdk_dd_bdev_to_bdev 00:30:40.154 ************************************ 00:30:40.154 12:14:45 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:30:40.154 12:14:45 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:30:40.154 12:14:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:40.154 12:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:40.154 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.154 ************************************ 00:30:40.154 START TEST spdk_dd_sparse 00:30:40.154 ************************************ 00:30:40.154 12:14:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:30:40.154 * Looking for test storage... 
00:30:40.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:40.154 12:14:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:40.154 12:14:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:40.154 12:14:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:40.413 12:14:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:40.413 12:14:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:40.413 12:14:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:40.413 12:14:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:40.413 12:14:45 -- scripts/common.sh@335 -- # IFS=.-: 00:30:40.413 12:14:45 -- scripts/common.sh@335 -- # read -ra ver1 00:30:40.413 12:14:45 -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.413 12:14:45 -- scripts/common.sh@336 -- # read -ra ver2 00:30:40.413 12:14:45 -- scripts/common.sh@337 -- # local 'op=<' 00:30:40.413 12:14:45 -- scripts/common.sh@339 -- # ver1_l=2 00:30:40.413 12:14:45 -- scripts/common.sh@340 -- # ver2_l=1 00:30:40.413 12:14:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:40.413 12:14:45 -- scripts/common.sh@343 -- # case "$op" in 00:30:40.413 12:14:45 -- scripts/common.sh@344 -- # : 1 00:30:40.413 12:14:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:40.413 12:14:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.413 12:14:45 -- scripts/common.sh@364 -- # decimal 1 00:30:40.413 12:14:45 -- scripts/common.sh@352 -- # local d=1 00:30:40.413 12:14:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.413 12:14:45 -- scripts/common.sh@354 -- # echo 1 00:30:40.413 12:14:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:40.413 12:14:45 -- scripts/common.sh@365 -- # decimal 2 00:30:40.413 12:14:45 -- scripts/common.sh@352 -- # local d=2 00:30:40.413 12:14:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.413 12:14:45 -- scripts/common.sh@354 -- # echo 2 00:30:40.413 12:14:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:40.413 12:14:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:40.413 12:14:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:40.413 12:14:45 -- scripts/common.sh@367 -- # return 0 00:30:40.413 12:14:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.413 12:14:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:40.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.413 --rc genhtml_branch_coverage=1 00:30:40.413 --rc genhtml_function_coverage=1 00:30:40.414 --rc genhtml_legend=1 00:30:40.414 --rc geninfo_all_blocks=1 00:30:40.414 --rc geninfo_unexecuted_blocks=1 00:30:40.414 00:30:40.414 ' 00:30:40.414 12:14:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.414 --rc genhtml_branch_coverage=1 00:30:40.414 --rc genhtml_function_coverage=1 00:30:40.414 --rc genhtml_legend=1 00:30:40.414 --rc geninfo_all_blocks=1 00:30:40.414 --rc geninfo_unexecuted_blocks=1 00:30:40.414 00:30:40.414 ' 00:30:40.414 12:14:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.414 --rc genhtml_branch_coverage=1 00:30:40.414 --rc genhtml_function_coverage=1 00:30:40.414 --rc genhtml_legend=1 00:30:40.414 --rc geninfo_all_blocks=1 00:30:40.414 --rc geninfo_unexecuted_blocks=1 00:30:40.414 00:30:40.414 ' 00:30:40.414 12:14:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.414 --rc genhtml_branch_coverage=1 00:30:40.414 --rc genhtml_function_coverage=1 00:30:40.414 --rc genhtml_legend=1 00:30:40.414 --rc geninfo_all_blocks=1 00:30:40.414 --rc geninfo_unexecuted_blocks=1 00:30:40.414 00:30:40.414 ' 00:30:40.414 12:14:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:40.414 12:14:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.414 12:14:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.414 12:14:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.414 12:14:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:40.414 12:14:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:40.414 12:14:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:40.414 12:14:45 -- paths/export.sh@5 -- # export PATH 00:30:40.414 12:14:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:40.414 12:14:45 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:30:40.414 12:14:45 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:30:40.414 12:14:45 -- dd/sparse.sh@110 -- # file1=file_zero1 00:30:40.414 12:14:45 -- dd/sparse.sh@111 -- # file2=file_zero2 00:30:40.414 12:14:45 -- dd/sparse.sh@112 -- # file3=file_zero3 00:30:40.414 12:14:45 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:30:40.414 12:14:45 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:30:40.414 12:14:45 -- dd/sparse.sh@116 -- # trap cleanup EXIT 
00:30:40.414 12:14:45 -- dd/sparse.sh@118 -- # prepare 00:30:40.414 12:14:45 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:30:40.414 12:14:45 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:30:40.414 1+0 records in 00:30:40.414 1+0 records out 00:30:40.414 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00982875 s, 427 MB/s 00:30:40.414 12:14:45 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:30:40.414 1+0 records in 00:30:40.414 1+0 records out 00:30:40.414 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00804785 s, 521 MB/s 00:30:40.414 12:14:45 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:30:40.414 1+0 records in 00:30:40.414 1+0 records out 00:30:40.414 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0103953 s, 403 MB/s 00:30:40.414 12:14:45 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:30:40.414 12:14:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:40.414 12:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:40.414 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.414 ************************************ 00:30:40.414 START TEST dd_sparse_file_to_file 00:30:40.414 ************************************ 00:30:40.414 12:14:45 -- common/autotest_common.sh@1114 -- # file_to_file 00:30:40.414 12:14:45 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:30:40.414 12:14:45 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:30:40.414 12:14:45 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:30:40.414 12:14:45 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:30:40.414 12:14:45 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:30:40.414 12:14:45 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:30:40.414 12:14:45 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:30:40.414 12:14:45 -- dd/sparse.sh@41 -- # gen_conf 00:30:40.414 12:14:45 -- dd/common.sh@31 -- # xtrace_disable 00:30:40.414 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.414 { 00:30:40.414 "subsystems": [ 00:30:40.414 { 00:30:40.414 "subsystem": "bdev", 00:30:40.414 "config": [ 00:30:40.414 { 00:30:40.414 "params": { 00:30:40.414 "block_size": 4096, 00:30:40.414 "filename": "dd_sparse_aio_disk", 00:30:40.414 "name": "dd_aio" 00:30:40.414 }, 00:30:40.414 "method": "bdev_aio_create" 00:30:40.414 }, 00:30:40.414 { 00:30:40.414 "params": { 00:30:40.414 "lvs_name": "dd_lvstore", 00:30:40.414 "bdev_name": "dd_aio" 00:30:40.414 }, 00:30:40.414 "method": "bdev_lvol_create_lvstore" 00:30:40.414 }, 00:30:40.414 { 00:30:40.414 "method": "bdev_wait_for_examine" 00:30:40.414 } 00:30:40.414 ] 00:30:40.414 } 00:30:40.414 ] 00:30:40.414 } 00:30:40.414 [2024-11-29 12:14:45.846959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:40.414 [2024-11-29 12:14:45.847350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146615 ] 00:30:40.673 [2024-11-29 12:14:46.012665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.673 [2024-11-29 12:14:46.115881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.931  [2024-11-29T12:14:46.701Z] Copying: 12/36 [MB] (average 750 MBps) 00:30:41.190 00:30:41.190 12:14:46 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:30:41.190 12:14:46 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:30:41.190 12:14:46 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:30:41.190 12:14:46 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:30:41.190 12:14:46 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:30:41.190 12:14:46 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:30:41.190 12:14:46 -- dd/sparse.sh@52 -- # stat1_b=24576 00:30:41.190 12:14:46 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:30:41.190 12:14:46 -- dd/sparse.sh@53 -- # stat2_b=24576 00:30:41.190 12:14:46 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:30:41.190 00:30:41.190 real 0m0.890s 00:30:41.190 user 0m0.483s 00:30:41.190 sys 0m0.270s 00:30:41.190 12:14:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:41.190 ************************************ 00:30:41.190 END TEST dd_sparse_file_to_file 00:30:41.190 ************************************ 00:30:41.190 12:14:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.449 12:14:46 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:30:41.449 12:14:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:41.449 12:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:41.449 12:14:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.449 ************************************ 00:30:41.449 START TEST dd_sparse_file_to_bdev 00:30:41.449 ************************************ 00:30:41.449 12:14:46 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:30:41.449 12:14:46 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:30:41.449 12:14:46 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:30:41.449 12:14:46 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:30:41.449 12:14:46 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:30:41.449 12:14:46 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:30:41.449 12:14:46 -- dd/sparse.sh@73 -- # gen_conf 00:30:41.449 12:14:46 -- dd/common.sh@31 -- # xtrace_disable 00:30:41.449 12:14:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.449 [2024-11-29 12:14:46.764389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:41.449 [2024-11-29 12:14:46.765056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146668 ] 00:30:41.449 { 00:30:41.449 "subsystems": [ 00:30:41.449 { 00:30:41.449 "subsystem": "bdev", 00:30:41.449 "config": [ 00:30:41.449 { 00:30:41.449 "params": { 00:30:41.449 "block_size": 4096, 00:30:41.449 "filename": "dd_sparse_aio_disk", 00:30:41.449 "name": "dd_aio" 00:30:41.449 }, 00:30:41.449 "method": "bdev_aio_create" 00:30:41.449 }, 00:30:41.449 { 00:30:41.449 "params": { 00:30:41.449 "lvs_name": "dd_lvstore", 00:30:41.449 "lvol_name": "dd_lvol", 00:30:41.449 "size": 37748736, 00:30:41.449 "thin_provision": true 00:30:41.449 }, 00:30:41.449 "method": "bdev_lvol_create" 00:30:41.449 }, 00:30:41.449 { 00:30:41.449 "method": "bdev_wait_for_examine" 00:30:41.449 } 00:30:41.449 ] 00:30:41.449 } 00:30:41.449 ] 00:30:41.449 } 00:30:41.449 [2024-11-29 12:14:46.907585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.708 [2024-11-29 12:14:47.004149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.708 [2024-11-29 12:14:47.104384] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:30:41.708  [2024-11-29T12:14:47.219Z] Copying: 12/36 [MB] (average 571 MBps)[2024-11-29 12:14:47.145889] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:30:41.966 00:30:41.966 00:30:42.224 00:30:42.224 real 0m0.768s 00:30:42.224 user 0m0.467s 00:30:42.224 sys 0m0.203s 00:30:42.224 ************************************ 00:30:42.224 END TEST dd_sparse_file_to_bdev 00:30:42.224 ************************************ 00:30:42.224 12:14:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:42.224 12:14:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.224 12:14:47 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:30:42.224 12:14:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:42.224 12:14:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:42.224 12:14:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.224 ************************************ 00:30:42.224 START TEST dd_sparse_bdev_to_file 00:30:42.224 ************************************ 00:30:42.225 12:14:47 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:30:42.225 12:14:47 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:30:42.225 12:14:47 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:30:42.225 12:14:47 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:30:42.225 12:14:47 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:30:42.225 12:14:47 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:30:42.225 12:14:47 -- dd/sparse.sh@91 -- # gen_conf 00:30:42.225 12:14:47 -- dd/common.sh@31 -- # xtrace_disable 00:30:42.225 12:14:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.225 [2024-11-29 12:14:47.583112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:42.225 [2024-11-29 12:14:47.584019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146717 ] 00:30:42.225 { 00:30:42.225 "subsystems": [ 00:30:42.225 { 00:30:42.225 "subsystem": "bdev", 00:30:42.225 "config": [ 00:30:42.225 { 00:30:42.225 "params": { 00:30:42.225 "block_size": 4096, 00:30:42.225 "filename": "dd_sparse_aio_disk", 00:30:42.225 "name": "dd_aio" 00:30:42.225 }, 00:30:42.225 "method": "bdev_aio_create" 00:30:42.225 }, 00:30:42.225 { 00:30:42.225 "method": "bdev_wait_for_examine" 00:30:42.225 } 00:30:42.225 ] 00:30:42.225 } 00:30:42.225 ] 00:30:42.225 } 00:30:42.225 [2024-11-29 12:14:47.731133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.483 [2024-11-29 12:14:47.827434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.483  [2024-11-29T12:14:48.561Z] Copying: 12/36 [MB] (average 923 MBps) 00:30:43.050 00:30:43.050 12:14:48 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:30:43.050 12:14:48 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:30:43.050 12:14:48 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:30:43.050 12:14:48 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:30:43.050 12:14:48 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:30:43.050 12:14:48 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:30:43.050 12:14:48 -- dd/sparse.sh@102 -- # stat2_b=24576 00:30:43.050 12:14:48 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:30:43.050 12:14:48 -- dd/sparse.sh@103 -- # stat3_b=24576 00:30:43.050 12:14:48 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:30:43.050 00:30:43.050 real 0m0.803s 00:30:43.050 user 0m0.471s 00:30:43.050 sys 0m0.215s 00:30:43.050 12:14:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.050 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.050 ************************************ 00:30:43.050 END TEST dd_sparse_bdev_to_file 00:30:43.050 ************************************ 00:30:43.050 12:14:48 -- dd/sparse.sh@1 -- # cleanup 00:30:43.050 12:14:48 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:30:43.050 12:14:48 -- dd/sparse.sh@12 -- # rm file_zero1 00:30:43.050 12:14:48 -- dd/sparse.sh@13 -- # rm file_zero2 00:30:43.050 12:14:48 -- dd/sparse.sh@14 -- # rm file_zero3 00:30:43.050 00:30:43.050 real 0m2.861s 00:30:43.050 user 0m1.646s 00:30:43.050 sys 0m0.861s 00:30:43.050 12:14:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.050 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.050 ************************************ 00:30:43.050 END TEST spdk_dd_sparse 00:30:43.050 ************************************ 00:30:43.050 12:14:48 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:30:43.050 12:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.050 12:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.050 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.050 ************************************ 00:30:43.050 START TEST spdk_dd_negative 00:30:43.050 ************************************ 00:30:43.050 12:14:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:30:43.050 * Looking for test storage... 
00:30:43.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:43.050 12:14:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:43.050 12:14:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:43.050 12:14:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:43.309 12:14:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:43.309 12:14:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:43.309 12:14:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:43.309 12:14:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:43.309 12:14:48 -- scripts/common.sh@335 -- # IFS=.-: 00:30:43.309 12:14:48 -- scripts/common.sh@335 -- # read -ra ver1 00:30:43.309 12:14:48 -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.309 12:14:48 -- scripts/common.sh@336 -- # read -ra ver2 00:30:43.309 12:14:48 -- scripts/common.sh@337 -- # local 'op=<' 00:30:43.309 12:14:48 -- scripts/common.sh@339 -- # ver1_l=2 00:30:43.309 12:14:48 -- scripts/common.sh@340 -- # ver2_l=1 00:30:43.309 12:14:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:43.309 12:14:48 -- scripts/common.sh@343 -- # case "$op" in 00:30:43.309 12:14:48 -- scripts/common.sh@344 -- # : 1 00:30:43.309 12:14:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:43.309 12:14:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.309 12:14:48 -- scripts/common.sh@364 -- # decimal 1 00:30:43.309 12:14:48 -- scripts/common.sh@352 -- # local d=1 00:30:43.309 12:14:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.309 12:14:48 -- scripts/common.sh@354 -- # echo 1 00:30:43.309 12:14:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:43.309 12:14:48 -- scripts/common.sh@365 -- # decimal 2 00:30:43.309 12:14:48 -- scripts/common.sh@352 -- # local d=2 00:30:43.309 12:14:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.309 12:14:48 -- scripts/common.sh@354 -- # echo 2 00:30:43.309 12:14:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:43.309 12:14:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:43.309 12:14:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:43.309 12:14:48 -- scripts/common.sh@367 -- # return 0 00:30:43.309 12:14:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.309 12:14:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:43.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.309 --rc genhtml_branch_coverage=1 00:30:43.309 --rc genhtml_function_coverage=1 00:30:43.309 --rc genhtml_legend=1 00:30:43.309 --rc geninfo_all_blocks=1 00:30:43.309 --rc geninfo_unexecuted_blocks=1 00:30:43.309 00:30:43.309 ' 00:30:43.309 12:14:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:43.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.309 --rc genhtml_branch_coverage=1 00:30:43.309 --rc genhtml_function_coverage=1 00:30:43.309 --rc genhtml_legend=1 00:30:43.309 --rc geninfo_all_blocks=1 00:30:43.309 --rc geninfo_unexecuted_blocks=1 00:30:43.309 00:30:43.309 ' 00:30:43.309 12:14:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:43.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.309 --rc genhtml_branch_coverage=1 00:30:43.309 --rc genhtml_function_coverage=1 00:30:43.309 --rc genhtml_legend=1 00:30:43.309 --rc geninfo_all_blocks=1 00:30:43.309 --rc geninfo_unexecuted_blocks=1 00:30:43.309 00:30:43.309 ' 00:30:43.309 12:14:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:43.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.309 --rc genhtml_branch_coverage=1 00:30:43.309 --rc genhtml_function_coverage=1 00:30:43.309 --rc genhtml_legend=1 00:30:43.309 --rc geninfo_all_blocks=1 00:30:43.309 --rc geninfo_unexecuted_blocks=1 00:30:43.309 00:30:43.309 ' 00:30:43.309 12:14:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:43.309 12:14:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.309 12:14:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.310 12:14:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.310 12:14:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:43.310 12:14:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:43.310 12:14:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:43.310 12:14:48 -- paths/export.sh@5 -- # export PATH 00:30:43.310 12:14:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:43.310 12:14:48 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:43.310 12:14:48 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:43.310 12:14:48 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:43.310 12:14:48 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:43.310 12:14:48 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 
00:30:43.310 12:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.310 12:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.310 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.310 ************************************ 00:30:43.310 START TEST dd_invalid_arguments 00:30:43.310 ************************************ 00:30:43.310 12:14:48 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:30:43.310 12:14:48 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:30:43.310 12:14:48 -- common/autotest_common.sh@650 -- # local es=0 00:30:43.310 12:14:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:30:43.310 12:14:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.310 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.310 12:14:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.310 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.310 12:14:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.310 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.310 12:14:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.310 12:14:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:43.310 12:14:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:30:43.310 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:30:43.310 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:30:43.310 options: 00:30:43.310 -c, --config JSON config file (default none) 00:30:43.310 --json JSON config file (default none) 00:30:43.310 --json-ignore-init-errors 00:30:43.310 don't exit on invalid config entry 00:30:43.310 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:30:43.310 -g, --single-file-segments 00:30:43.310 force creating just one hugetlbfs file 00:30:43.310 -h, --help show this usage 00:30:43.310 -i, --shm-id shared memory ID (optional) 00:30:43.310 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:30:43.310 --lcores lcore to CPU mapping list. The list is in the format: 00:30:43.310 [<,lcores[@CPUs]>...] 00:30:43.310 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:30:43.310 Within the group, '-' is used for range separator, 00:30:43.310 ',' is used for single number separator. 00:30:43.310 '( )' can be omitted for single element group, 00:30:43.310 '@' can be omitted if cpus and lcores have the same value 00:30:43.310 -n, --mem-channels channel number of memory channels used for DPDK 00:30:43.310 -p, --main-core main (primary) core for DPDK 00:30:43.310 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:30:43.310 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:30:43.310 --disable-cpumask-locks Disable CPU core lock files. 
00:30:43.310 --silence-noticelog disable notice level logging to stderr 00:30:43.310 --msg-mempool-size global message memory pool size in count (default: 262143) 00:30:43.310 -u, --no-pci disable PCI access 00:30:43.310 --wait-for-rpc wait for RPCs to initialize subsystems 00:30:43.310 --max-delay maximum reactor delay (in microseconds) 00:30:43.310 -B, --pci-blocked pci addr to block (can be used more than once) 00:30:43.310 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:30:43.310 -R, --huge-unlink unlink huge files after initialization 00:30:43.310 -v, --version print SPDK version 00:30:43.310 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:30:43.310 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:30:43.310 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:30:43.310 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:30:43.310 Tracepoints vary in size and can use more than one trace entry. 00:30:43.310 --rpcs-allowed comma-separated list of permitted RPCS 00:30:43.310 --env-context Opaque context for use of the env implementation 00:30:43.310 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:30:43.310 --no-huge run without using hugepages 00:30:43.310 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:30:43.310 -e, --tpoint-group [:] 00:30:43.310 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:30:43.310 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:30:43.310 Groups and [2024-11-29 12:14:48.688311] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:30:43.310 masks can be combined (e.g. thread,bdev:0x1). 00:30:43.310 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:30:43.310 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:30:43.310 [--------- DD Options ---------] 00:30:43.310 --if Input file. Must specify either --if or --ib. 00:30:43.310 --ib Input bdev. Must specifier either --if or --ib 00:30:43.310 --of Output file. Must specify either --of or --ob. 00:30:43.310 --ob Output bdev. Must specify either --of or --ob. 00:30:43.310 --iflag Input file flags. 00:30:43.310 --oflag Output file flags. 00:30:43.310 --bs I/O unit size (default: 4096) 00:30:43.310 --qd Queue depth (default: 2) 00:30:43.310 --count I/O unit count. The number of I/O units to copy. (default: all) 00:30:43.310 --skip Skip this many I/O units at start of input. 
(default: 0) 00:30:43.310 --seek Skip this many I/O units at start of output. (default: 0) 00:30:43.310 --aio Force usage of AIO. (by default io_uring is used if available) 00:30:43.310 --sparse Enable hole skipping in input target 00:30:43.310 Available iflag and oflag values: 00:30:43.310 append - append mode 00:30:43.310 direct - use direct I/O for data 00:30:43.310 directory - fail unless a directory 00:30:43.310 dsync - use synchronized I/O for data 00:30:43.310 noatime - do not update access time 00:30:43.310 noctty - do not assign controlling terminal from file 00:30:43.310 nofollow - do not follow symlinks 00:30:43.310 nonblock - use non-blocking I/O 00:30:43.310 sync - use synchronized I/O for data and metadata 00:30:43.310 12:14:48 -- common/autotest_common.sh@653 -- # es=2 00:30:43.310 12:14:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:43.310 12:14:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:43.310 12:14:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:43.310 00:30:43.310 real 0m0.098s 00:30:43.310 user 0m0.050s 00:30:43.310 sys 0m0.047s 00:30:43.310 12:14:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.310 ************************************ 00:30:43.310 END TEST dd_invalid_arguments 00:30:43.310 ************************************ 00:30:43.310 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.310 12:14:48 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:30:43.310 12:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.310 12:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.310 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.310 ************************************ 00:30:43.310 START TEST dd_double_input 00:30:43.310 ************************************ 00:30:43.310 12:14:48 -- common/autotest_common.sh@1114 -- # double_input 00:30:43.311 12:14:48 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:30:43.311 12:14:48 -- common/autotest_common.sh@650 -- # local es=0 00:30:43.311 12:14:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:30:43.311 12:14:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.311 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.311 12:14:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.311 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.311 12:14:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.311 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.311 12:14:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.311 12:14:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:43.311 12:14:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:30:43.569 [2024-11-29 12:14:48.834046] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:30:43.569 ************************************ 00:30:43.569 END TEST dd_double_input 00:30:43.569 ************************************ 00:30:43.569 12:14:48 -- common/autotest_common.sh@653 -- # es=22 00:30:43.569 12:14:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:43.569 12:14:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:43.569 12:14:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:43.569 00:30:43.569 real 0m0.105s 00:30:43.569 user 0m0.056s 00:30:43.569 sys 0m0.049s 00:30:43.569 12:14:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.569 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.569 12:14:48 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:30:43.569 12:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.569 12:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.569 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.569 ************************************ 00:30:43.569 START TEST dd_double_output 00:30:43.569 ************************************ 00:30:43.569 12:14:48 -- common/autotest_common.sh@1114 -- # double_output 00:30:43.569 12:14:48 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:30:43.569 12:14:48 -- common/autotest_common.sh@650 -- # local es=0 00:30:43.569 12:14:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:30:43.569 12:14:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.569 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.569 12:14:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.569 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.569 12:14:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.569 12:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.569 12:14:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.569 12:14:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:43.569 12:14:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:30:43.569 [2024-11-29 12:14:48.980069] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:30:43.569 12:14:49 -- common/autotest_common.sh@653 -- # es=22 00:30:43.570 12:14:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:43.570 12:14:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:43.570 12:14:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:43.570 00:30:43.570 real 0m0.089s 00:30:43.570 user 0m0.039s 00:30:43.570 sys 0m0.048s 00:30:43.570 12:14:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.570 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:43.570 ************************************ 00:30:43.570 END TEST dd_double_output 00:30:43.570 ************************************ 00:30:43.570 12:14:49 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:30:43.570 12:14:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.570 12:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.570 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:43.570 ************************************ 00:30:43.570 START TEST dd_no_input 00:30:43.570 ************************************ 00:30:43.570 12:14:49 -- common/autotest_common.sh@1114 -- # no_input 00:30:43.570 12:14:49 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:30:43.570 12:14:49 -- common/autotest_common.sh@650 -- # local es=0 00:30:43.570 12:14:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:30:43.570 12:14:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.570 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.570 12:14:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.570 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.570 12:14:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.570 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.570 12:14:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.570 12:14:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:43.570 12:14:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:30:43.828 [2024-11-29 12:14:49.116275] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:30:43.828 12:14:49 -- common/autotest_common.sh@653 -- # es=22 00:30:43.828 12:14:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:43.828 12:14:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:43.828 12:14:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:43.828 00:30:43.828 real 0m0.090s 00:30:43.828 user 0m0.062s 00:30:43.828 sys 0m0.028s 00:30:43.828 12:14:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.828 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:43.828 ************************************ 00:30:43.828 END TEST dd_no_input 00:30:43.828 ************************************ 00:30:43.828 12:14:49 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:30:43.828 12:14:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.828 12:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.828 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:43.828 ************************************ 
00:30:43.828 START TEST dd_no_output 00:30:43.828 ************************************ 00:30:43.828 12:14:49 -- common/autotest_common.sh@1114 -- # no_output 00:30:43.828 12:14:49 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:43.828 12:14:49 -- common/autotest_common.sh@650 -- # local es=0 00:30:43.828 12:14:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:43.828 12:14:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.828 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.828 12:14:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.828 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.828 12:14:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.828 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.828 12:14:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.828 12:14:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:43.828 12:14:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:43.828 [2024-11-29 12:14:49.264006] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:30:43.828 12:14:49 -- common/autotest_common.sh@653 -- # es=22 00:30:43.828 12:14:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:43.828 12:14:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:43.828 12:14:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:43.828 00:30:43.828 real 0m0.099s 00:30:43.828 user 0m0.055s 00:30:43.828 sys 0m0.045s 00:30:43.828 12:14:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:43.828 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:43.828 ************************************ 00:30:43.828 END TEST dd_no_output 00:30:43.828 ************************************ 00:30:44.087 12:14:49 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:30:44.087 12:14:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.087 12:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.087 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 ************************************ 00:30:44.087 START TEST dd_wrong_blocksize 00:30:44.087 ************************************ 00:30:44.087 12:14:49 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:30:44.087 12:14:49 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:30:44.087 12:14:49 -- common/autotest_common.sh@650 -- # local es=0 00:30:44.087 12:14:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:30:44.087 12:14:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.087 12:14:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.087 12:14:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:44.087 12:14:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:30:44.087 [2024-11-29 12:14:49.413341] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:30:44.087 12:14:49 -- common/autotest_common.sh@653 -- # es=22 00:30:44.087 12:14:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:44.087 12:14:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:44.087 12:14:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:44.087 00:30:44.087 real 0m0.095s 00:30:44.087 user 0m0.037s 00:30:44.087 sys 0m0.059s 00:30:44.087 12:14:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:44.087 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 ************************************ 00:30:44.087 END TEST dd_wrong_blocksize 00:30:44.087 ************************************ 00:30:44.087 12:14:49 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:30:44.087 12:14:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.087 12:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.087 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 ************************************ 00:30:44.087 START TEST dd_smaller_blocksize 00:30:44.087 ************************************ 00:30:44.087 12:14:49 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:30:44.087 12:14:49 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:30:44.087 12:14:49 -- common/autotest_common.sh@650 -- # local es=0 00:30:44.087 12:14:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:30:44.087 12:14:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.087 12:14:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.087 12:14:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.087 12:14:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:30:44.087 12:14:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:30:44.087 [2024-11-29 12:14:49.562984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:44.087 [2024-11-29 12:14:49.563252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146978 ] 00:30:44.346 [2024-11-29 12:14:49.713359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.346 [2024-11-29 12:14:49.810804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.604 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:30:44.604 [2024-11-29 12:14:49.987640] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:30:44.604 [2024-11-29 12:14:49.987752] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:44.604 [2024-11-29 12:14:50.116041] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:44.863 12:14:50 -- common/autotest_common.sh@653 -- # es=244 00:30:44.863 12:14:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:44.863 12:14:50 -- common/autotest_common.sh@662 -- # es=116 00:30:44.863 12:14:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:44.863 12:14:50 -- common/autotest_common.sh@670 -- # es=1 00:30:44.863 12:14:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:44.863 00:30:44.863 real 0m0.732s 00:30:44.863 user 0m0.394s 00:30:44.863 sys 0m0.238s 00:30:44.863 12:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:44.863 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:44.863 ************************************ 00:30:44.863 END TEST dd_smaller_blocksize 00:30:44.863 ************************************ 00:30:44.863 12:14:50 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:30:44.863 12:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.863 12:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.863 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:44.863 ************************************ 00:30:44.863 START TEST dd_invalid_count 00:30:44.863 ************************************ 00:30:44.863 12:14:50 -- common/autotest_common.sh@1114 -- # invalid_count 00:30:44.863 12:14:50 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:30:44.863 12:14:50 -- common/autotest_common.sh@650 -- # local es=0 00:30:44.863 12:14:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:30:44.863 12:14:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.863 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.863 12:14:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.863 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.863 12:14:50 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.863 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:44.863 12:14:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.863 12:14:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:44.863 12:14:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:30:44.863 [2024-11-29 12:14:50.335593] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:30:44.863 12:14:50 -- common/autotest_common.sh@653 -- # es=22 00:30:44.863 12:14:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:44.863 12:14:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:44.863 12:14:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:44.863 00:30:44.863 real 0m0.090s 00:30:44.863 user 0m0.045s 00:30:44.863 sys 0m0.044s 00:30:45.122 12:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:45.122 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.122 ************************************ 00:30:45.122 END TEST dd_invalid_count 00:30:45.122 ************************************ 00:30:45.122 12:14:50 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:30:45.122 12:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.122 12:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.122 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.122 ************************************ 00:30:45.122 START TEST dd_invalid_oflag 00:30:45.122 ************************************ 00:30:45.122 12:14:50 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:30:45.122 12:14:50 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:30:45.122 12:14:50 -- common/autotest_common.sh@650 -- # local es=0 00:30:45.122 12:14:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:30:45.122 12:14:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.122 12:14:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.122 12:14:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.122 12:14:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:30:45.122 [2024-11-29 12:14:50.479388] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:30:45.122 12:14:50 -- common/autotest_common.sh@653 -- # es=22 00:30:45.122 12:14:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.122 12:14:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.122 
12:14:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.122 00:30:45.122 real 0m0.091s 00:30:45.122 user 0m0.043s 00:30:45.122 sys 0m0.048s 00:30:45.122 12:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:45.122 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.122 ************************************ 00:30:45.122 END TEST dd_invalid_oflag 00:30:45.122 ************************************ 00:30:45.122 12:14:50 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:30:45.122 12:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.122 12:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.122 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.122 ************************************ 00:30:45.122 START TEST dd_invalid_iflag 00:30:45.122 ************************************ 00:30:45.122 12:14:50 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:30:45.122 12:14:50 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:30:45.122 12:14:50 -- common/autotest_common.sh@650 -- # local es=0 00:30:45.122 12:14:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:30:45.122 12:14:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.122 12:14:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.122 12:14:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.122 12:14:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.122 12:14:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:30:45.122 [2024-11-29 12:14:50.626231] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:30:45.381 12:14:50 -- common/autotest_common.sh@653 -- # es=22 00:30:45.381 12:14:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.381 12:14:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.381 12:14:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.381 00:30:45.381 real 0m0.093s 00:30:45.381 user 0m0.043s 00:30:45.381 sys 0m0.050s 00:30:45.381 12:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:45.381 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.381 ************************************ 00:30:45.381 END TEST dd_invalid_iflag 00:30:45.381 ************************************ 00:30:45.381 12:14:50 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:30:45.381 12:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.381 12:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.381 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.381 ************************************ 00:30:45.381 START TEST dd_unknown_flag 00:30:45.381 ************************************ 00:30:45.381 12:14:50 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:30:45.381 12:14:50 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:30:45.381 12:14:50 -- common/autotest_common.sh@650 -- # local es=0 00:30:45.381 12:14:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:30:45.381 12:14:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.381 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.381 12:14:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.381 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.381 12:14:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.381 12:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.381 12:14:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.381 12:14:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.381 12:14:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:30:45.381 [2024-11-29 12:14:50.771716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:45.381 [2024-11-29 12:14:50.771959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147087 ] 00:30:45.640 [2024-11-29 12:14:50.919175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.640 [2024-11-29 12:14:51.005181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.640 [2024-11-29 12:14:51.092507] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:30:45.640 [2024-11-29 12:14:51.092629] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:30:45.640 [2024-11-29 12:14:51.092678] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:30:45.640 [2024-11-29 12:14:51.092769] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:45.898 [2024-11-29 12:14:51.217532] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:45.898 12:14:51 -- common/autotest_common.sh@653 -- # es=236 00:30:45.898 12:14:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.898 12:14:51 -- common/autotest_common.sh@662 -- # es=108 00:30:45.898 12:14:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:45.898 12:14:51 -- common/autotest_common.sh@670 -- # es=1 00:30:45.898 12:14:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.898 00:30:45.898 real 0m0.626s 00:30:45.898 user 0m0.326s 00:30:45.898 sys 0m0.200s 00:30:45.898 12:14:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:45.898 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:30:45.898 ************************************ 00:30:45.898 END 
TEST dd_unknown_flag 00:30:45.898 ************************************ 00:30:45.898 12:14:51 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:30:45.898 12:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.898 12:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.898 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:30:45.898 ************************************ 00:30:45.898 START TEST dd_invalid_json 00:30:45.898 ************************************ 00:30:45.898 12:14:51 -- common/autotest_common.sh@1114 -- # invalid_json 00:30:45.898 12:14:51 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:45.898 12:14:51 -- common/autotest_common.sh@650 -- # local es=0 00:30:45.898 12:14:51 -- dd/negative_dd.sh@95 -- # : 00:30:45.898 12:14:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:45.898 12:14:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.898 12:14:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.898 12:14:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.898 12:14:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.898 12:14:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.898 12:14:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.898 12:14:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.898 12:14:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.898 12:14:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:46.156 [2024-11-29 12:14:51.444179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:46.156 [2024-11-29 12:14:51.444781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147129 ] 00:30:46.156 [2024-11-29 12:14:51.586783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.414 [2024-11-29 12:14:51.674832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.414 [2024-11-29 12:14:51.675148] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:30:46.414 [2024-11-29 12:14:51.675453] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:46.414 [2024-11-29 12:14:51.675596] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:46.414 12:14:51 -- common/autotest_common.sh@653 -- # es=234 00:30:46.414 12:14:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:46.414 12:14:51 -- common/autotest_common.sh@662 -- # es=106 00:30:46.414 12:14:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:46.414 12:14:51 -- common/autotest_common.sh@670 -- # es=1 00:30:46.414 12:14:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:46.414 00:30:46.414 real 0m0.405s 00:30:46.414 user 0m0.219s 00:30:46.414 sys 0m0.087s 00:30:46.414 12:14:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:46.414 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.414 ************************************ 00:30:46.414 END TEST dd_invalid_json 00:30:46.414 ************************************ 00:30:46.414 00:30:46.414 real 0m3.384s 00:30:46.414 user 0m1.851s 00:30:46.414 sys 0m1.211s 00:30:46.414 12:14:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:46.414 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.414 ************************************ 00:30:46.414 END TEST spdk_dd_negative 00:30:46.414 ************************************ 00:30:46.414 00:30:46.414 real 1m12.733s 00:30:46.414 user 0m44.147s 00:30:46.414 sys 0m18.482s 00:30:46.414 12:14:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:46.414 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.414 ************************************ 00:30:46.414 END TEST spdk_dd 00:30:46.414 ************************************ 00:30:46.414 12:14:51 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:30:46.414 12:14:51 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:46.414 12:14:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:46.414 12:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.414 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.673 ************************************ 00:30:46.673 START TEST blockdev_nvme 00:30:46.673 ************************************ 00:30:46.673 12:14:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:46.673 * Looking for test storage... 
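The negative spdk_dd cases traced above (invalid --iflag, unknown --oflag value, invalid --json config) all follow the same pattern: run the binary with a deliberately bad option and require a non-zero exit status. A condensed sketch of those invocations, using a plain '!' in place of the suite's NOT helper and the paths from this workspace:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # --iflag without --if is rejected: "--iflags may be used only with --if"
    ! $DD --ib= --ob= --iflag=0
    # -1 is not a recognized file flag: "Unknown file flag: -1"
    ! $DD --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
          --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
    # dd_invalid_json additionally passes --json /dev/fd/62 with a bad config and
    # expects "Parsing JSON configuration failed" (the payload is not shown in this trace)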
00:30:46.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:46.673 12:14:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:46.673 12:14:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:46.673 12:14:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:46.673 12:14:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:46.673 12:14:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:46.673 12:14:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:46.673 12:14:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:46.673 12:14:52 -- scripts/common.sh@335 -- # IFS=.-: 00:30:46.673 12:14:52 -- scripts/common.sh@335 -- # read -ra ver1 00:30:46.673 12:14:52 -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.673 12:14:52 -- scripts/common.sh@336 -- # read -ra ver2 00:30:46.673 12:14:52 -- scripts/common.sh@337 -- # local 'op=<' 00:30:46.673 12:14:52 -- scripts/common.sh@339 -- # ver1_l=2 00:30:46.673 12:14:52 -- scripts/common.sh@340 -- # ver2_l=1 00:30:46.673 12:14:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:46.673 12:14:52 -- scripts/common.sh@343 -- # case "$op" in 00:30:46.673 12:14:52 -- scripts/common.sh@344 -- # : 1 00:30:46.673 12:14:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:46.673 12:14:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:46.673 12:14:52 -- scripts/common.sh@364 -- # decimal 1 00:30:46.673 12:14:52 -- scripts/common.sh@352 -- # local d=1 00:30:46.673 12:14:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.673 12:14:52 -- scripts/common.sh@354 -- # echo 1 00:30:46.673 12:14:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:46.673 12:14:52 -- scripts/common.sh@365 -- # decimal 2 00:30:46.673 12:14:52 -- scripts/common.sh@352 -- # local d=2 00:30:46.673 12:14:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.673 12:14:52 -- scripts/common.sh@354 -- # echo 2 00:30:46.673 12:14:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:46.673 12:14:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:46.673 12:14:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:46.673 12:14:52 -- scripts/common.sh@367 -- # return 0 00:30:46.673 12:14:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.673 12:14:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.673 --rc genhtml_branch_coverage=1 00:30:46.673 --rc genhtml_function_coverage=1 00:30:46.673 --rc genhtml_legend=1 00:30:46.673 --rc geninfo_all_blocks=1 00:30:46.673 --rc geninfo_unexecuted_blocks=1 00:30:46.673 00:30:46.673 ' 00:30:46.673 12:14:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.673 --rc genhtml_branch_coverage=1 00:30:46.673 --rc genhtml_function_coverage=1 00:30:46.673 --rc genhtml_legend=1 00:30:46.673 --rc geninfo_all_blocks=1 00:30:46.673 --rc geninfo_unexecuted_blocks=1 00:30:46.673 00:30:46.673 ' 00:30:46.673 12:14:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.673 --rc genhtml_branch_coverage=1 00:30:46.673 --rc genhtml_function_coverage=1 00:30:46.673 --rc genhtml_legend=1 00:30:46.673 --rc geninfo_all_blocks=1 00:30:46.673 --rc geninfo_unexecuted_blocks=1 00:30:46.673 00:30:46.673 ' 00:30:46.673 12:14:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.673 --rc genhtml_branch_coverage=1 00:30:46.673 --rc genhtml_function_coverage=1 00:30:46.673 --rc genhtml_legend=1 00:30:46.673 --rc geninfo_all_blocks=1 00:30:46.673 --rc geninfo_unexecuted_blocks=1 00:30:46.673 00:30:46.673 ' 00:30:46.673 12:14:52 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:46.673 12:14:52 -- bdev/nbd_common.sh@6 -- # set -e 00:30:46.673 12:14:52 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:46.673 12:14:52 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:46.673 12:14:52 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:46.673 12:14:52 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:46.673 12:14:52 -- bdev/blockdev.sh@18 -- # : 00:30:46.673 12:14:52 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:30:46.673 12:14:52 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:30:46.673 12:14:52 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:30:46.673 12:14:52 -- bdev/blockdev.sh@672 -- # uname -s 00:30:46.673 12:14:52 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:30:46.673 12:14:52 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:30:46.673 12:14:52 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:30:46.673 12:14:52 -- bdev/blockdev.sh@681 -- # crypto_device= 00:30:46.673 12:14:52 -- bdev/blockdev.sh@682 -- # dek= 00:30:46.673 12:14:52 -- bdev/blockdev.sh@683 -- # env_ctx= 00:30:46.673 12:14:52 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:30:46.673 12:14:52 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:30:46.673 12:14:52 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:30:46.673 12:14:52 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:30:46.673 12:14:52 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:30:46.673 12:14:52 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147225 00:30:46.673 12:14:52 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:46.673 12:14:52 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:46.673 12:14:52 -- bdev/blockdev.sh@47 -- # waitforlisten 147225 00:30:46.673 12:14:52 -- common/autotest_common.sh@829 -- # '[' -z 147225 ']' 00:30:46.673 12:14:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.673 12:14:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.673 12:14:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.673 12:14:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.673 12:14:52 -- common/autotest_common.sh@10 -- # set +x 00:30:46.673 [2024-11-29 12:14:52.173439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
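The lcov gate traced just above comes from the cmp_versions helper in scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, with each field normalized through decimal(); lt 1.15 2 succeeds because 1 < 2 in the first field. A simplified sketch of that comparison, not the verbatim helper:

    lt() {   # is version $1 < version $2 ? (field-by-field compare, simplified)
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }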
00:30:46.673 [2024-11-29 12:14:52.173690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147225 ] 00:30:46.932 [2024-11-29 12:14:52.318585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.932 [2024-11-29 12:14:52.410304] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:46.932 [2024-11-29 12:14:52.410682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.867 12:14:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.867 12:14:53 -- common/autotest_common.sh@862 -- # return 0 00:30:47.867 12:14:53 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:30:47.867 12:14:53 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:30:47.867 12:14:53 -- bdev/blockdev.sh@79 -- # local json 00:30:47.867 12:14:53 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:30:47.867 12:14:53 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:47.867 12:14:53 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:30:47.867 12:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.867 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 12:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.867 12:14:53 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:30:47.867 12:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.867 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 12:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.867 12:14:53 -- bdev/blockdev.sh@738 -- # cat 00:30:47.867 12:14:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:30:47.867 12:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.867 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 12:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.867 12:14:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:30:47.867 12:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.867 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 12:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.867 12:14:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:47.867 12:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.867 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 12:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.867 12:14:53 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:30:47.867 12:14:53 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:30:47.868 12:14:53 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:30:47.868 12:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.868 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 12:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.868 12:14:53 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:30:47.868 12:14:53 -- bdev/blockdev.sh@747 -- # jq -r .name 00:30:47.868 12:14:53 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "9a6c12c7-ae38-4f37-9e33-52a35e673475"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9a6c12c7-ae38-4f37-9e33-52a35e673475",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:30:47.868 12:14:53 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:30:47.868 12:14:53 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:30:47.868 12:14:53 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:30:47.868 12:14:53 -- bdev/blockdev.sh@752 -- # killprocess 147225 00:30:47.868 12:14:53 -- common/autotest_common.sh@936 -- # '[' -z 147225 ']' 00:30:47.868 12:14:53 -- common/autotest_common.sh@940 -- # kill -0 147225 00:30:47.868 12:14:53 -- common/autotest_common.sh@941 -- # uname 00:30:47.868 12:14:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:47.868 12:14:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147225 00:30:47.868 12:14:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:47.868 12:14:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:47.868 killing process with pid 147225 00:30:47.868 12:14:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147225' 00:30:47.868 12:14:53 -- common/autotest_common.sh@955 -- # kill 147225 00:30:47.868 12:14:53 -- common/autotest_common.sh@960 -- # wait 147225 00:30:48.435 12:14:53 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:48.435 12:14:53 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:48.435 12:14:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:30:48.435 12:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:48.435 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.435 ************************************ 00:30:48.435 START TEST bdev_hello_world 00:30:48.435 ************************************ 00:30:48.435 12:14:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:48.435 [2024-11-29 12:14:53.886156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:48.435 [2024-11-29 12:14:53.886413] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147283 ] 00:30:48.693 [2024-11-29 12:14:54.027266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.694 [2024-11-29 12:14:54.122232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.951 [2024-11-29 12:14:54.339631] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:48.951 [2024-11-29 12:14:54.339740] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:30:48.951 [2024-11-29 12:14:54.339816] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:48.951 [2024-11-29 12:14:54.342408] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:48.951 [2024-11-29 12:14:54.343016] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:48.951 [2024-11-29 12:14:54.343089] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:48.951 [2024-11-29 12:14:54.343415] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:48.951 00:30:48.951 [2024-11-29 12:14:54.343508] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:49.208 00:30:49.209 real 0m0.763s 00:30:49.209 user 0m0.495s 00:30:49.209 sys 0m0.169s 00:30:49.209 12:14:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:49.209 12:14:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.209 ************************************ 00:30:49.209 END TEST bdev_hello_world 00:30:49.209 ************************************ 00:30:49.209 12:14:54 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:49.209 12:14:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:49.209 12:14:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.209 12:14:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.209 ************************************ 00:30:49.209 START TEST bdev_bounds 00:30:49.209 ************************************ 00:30:49.209 12:14:54 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:30:49.209 12:14:54 -- bdev/blockdev.sh@288 -- # bdevio_pid=147321 00:30:49.209 12:14:54 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:49.209 12:14:54 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:49.209 12:14:54 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 147321' 00:30:49.209 Process bdevio pid: 147321 00:30:49.209 12:14:54 -- bdev/blockdev.sh@291 -- # waitforlisten 147321 00:30:49.209 12:14:54 -- common/autotest_common.sh@829 -- # '[' -z 147321 ']' 00:30:49.209 12:14:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.209 12:14:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:49.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.209 12:14:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:49.209 12:14:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:49.209 12:14:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.209 [2024-11-29 12:14:54.710629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:49.209 [2024-11-29 12:14:54.710883] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147321 ] 00:30:49.466 [2024-11-29 12:14:54.878155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:49.723 [2024-11-29 12:14:54.981952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.723 [2024-11-29 12:14:54.982099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.723 [2024-11-29 12:14:54.982106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.289 12:14:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:50.289 12:14:55 -- common/autotest_common.sh@862 -- # return 0 00:30:50.289 12:14:55 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:50.548 I/O targets: 00:30:50.548 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:30:50.548 00:30:50.548 00:30:50.548 CUnit - A unit testing framework for C - Version 2.1-3 00:30:50.548 http://cunit.sourceforge.net/ 00:30:50.548 00:30:50.548 00:30:50.548 Suite: bdevio tests on: Nvme0n1 00:30:50.548 Test: blockdev write read block ...passed 00:30:50.548 Test: blockdev write zeroes read block ...passed 00:30:50.548 Test: blockdev write zeroes read no split ...passed 00:30:50.548 Test: blockdev write zeroes read split ...passed 00:30:50.548 Test: blockdev write zeroes read split partial ...passed 00:30:50.548 Test: blockdev reset ...[2024-11-29 12:14:55.829138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:50.548 [2024-11-29 12:14:55.831645] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:50.548 passed 00:30:50.548 Test: blockdev write read 8 blocks ...passed 00:30:50.548 Test: blockdev write read size > 128k ...passed 00:30:50.548 Test: blockdev write read invalid size ...passed 00:30:50.548 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:50.548 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:50.548 Test: blockdev write read max offset ...passed 00:30:50.548 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:50.548 Test: blockdev writev readv 8 blocks ...passed 00:30:50.548 Test: blockdev writev readv 30 x 1block ...passed 00:30:50.548 Test: blockdev writev readv block ...passed 00:30:50.548 Test: blockdev writev readv size > 128k ...passed 00:30:50.548 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:50.548 Test: blockdev comparev and writev ...[2024-11-29 12:14:55.838973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x7620d000 len:0x1000 00:30:50.548 [2024-11-29 12:14:55.839210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:50.548 passed 00:30:50.548 Test: blockdev nvme passthru rw ...passed 00:30:50.548 Test: blockdev nvme passthru vendor specific ...[2024-11-29 12:14:55.840389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:50.548 [2024-11-29 12:14:55.840567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:50.548 passed 00:30:50.548 Test: blockdev nvme admin passthru ...passed 00:30:50.548 Test: blockdev copy ...passed 00:30:50.548 00:30:50.548 Run Summary: Type Total Ran Passed Failed Inactive 00:30:50.548 suites 1 1 n/a 0 0 00:30:50.548 tests 23 23 23 0 0 00:30:50.548 asserts 152 152 152 0 n/a 00:30:50.548 00:30:50.548 Elapsed time = 0.086 seconds 00:30:50.548 0 00:30:50.548 12:14:55 -- bdev/blockdev.sh@293 -- # killprocess 147321 00:30:50.548 12:14:55 -- common/autotest_common.sh@936 -- # '[' -z 147321 ']' 00:30:50.548 12:14:55 -- common/autotest_common.sh@940 -- # kill -0 147321 00:30:50.548 12:14:55 -- common/autotest_common.sh@941 -- # uname 00:30:50.548 12:14:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:50.548 12:14:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147321 00:30:50.548 killing process with pid 147321 00:30:50.548 12:14:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:50.548 12:14:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:50.548 12:14:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147321' 00:30:50.548 12:14:55 -- common/autotest_common.sh@955 -- # kill 147321 00:30:50.548 12:14:55 -- common/autotest_common.sh@960 -- # wait 147321 00:30:50.805 ************************************ 00:30:50.805 END TEST bdev_bounds 00:30:50.805 ************************************ 00:30:50.805 12:14:56 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:50.805 00:30:50.805 real 0m1.482s 00:30:50.805 user 0m3.765s 00:30:50.805 sys 0m0.315s 00:30:50.805 12:14:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:50.805 12:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:50.805 12:14:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
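The bdev_bounds results above come from the bdevio app driven over RPC: it is started with -w so it waits for a perform_tests RPC, then tests.py triggers the CUnit cases (write/read, reset, comparev/writev, NVMe passthru, copy) against Nvme0n1. Condensed from the traced commands:

    BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio
    # start bdevio against the generated bdev config and let it wait for RPC
    $BDEVIO/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # kick off the test suite once the RPC socket is up
    $BDEVIO/tests.py perform_tests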
00:30:50.805 12:14:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:50.805 12:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:50.805 12:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:50.805 ************************************ 00:30:50.805 START TEST bdev_nbd 00:30:50.805 ************************************ 00:30:50.805 12:14:56 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:30:50.805 12:14:56 -- bdev/blockdev.sh@298 -- # uname -s 00:30:50.805 12:14:56 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:50.805 12:14:56 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:50.805 12:14:56 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:50.805 12:14:56 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:30:50.805 12:14:56 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:50.805 12:14:56 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:30:50.805 12:14:56 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:50.805 12:14:56 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:50.805 12:14:56 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:50.805 12:14:56 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:30:50.805 12:14:56 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:30:50.805 12:14:56 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:50.805 12:14:56 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:30:50.805 12:14:56 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:50.805 12:14:56 -- bdev/blockdev.sh@316 -- # nbd_pid=147378 00:30:50.805 12:14:56 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:50.805 12:14:56 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:50.805 12:14:56 -- bdev/blockdev.sh@318 -- # waitforlisten 147378 /var/tmp/spdk-nbd.sock 00:30:50.805 12:14:56 -- common/autotest_common.sh@829 -- # '[' -z 147378 ']' 00:30:50.805 12:14:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:50.805 12:14:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.805 12:14:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:50.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:50.805 12:14:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.805 12:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:50.805 [2024-11-29 12:14:56.235274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:30:50.805 [2024-11-29 12:14:56.235715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.062 [2024-11-29 12:14:56.375706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.062 [2024-11-29 12:14:56.483394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.993 12:14:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.993 12:14:57 -- common/autotest_common.sh@862 -- # return 0 00:30:51.993 12:14:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@24 -- # local i 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:30:51.993 12:14:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:30:52.251 12:14:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:52.251 12:14:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:52.251 12:14:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:52.251 12:14:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:52.251 12:14:57 -- common/autotest_common.sh@867 -- # local i 00:30:52.251 12:14:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:52.251 12:14:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:52.251 12:14:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:52.251 12:14:57 -- common/autotest_common.sh@871 -- # break 00:30:52.251 12:14:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:52.251 12:14:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:52.251 12:14:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:52.251 1+0 records in 00:30:52.251 1+0 records out 00:30:52.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606088 s, 6.8 MB/s 00:30:52.251 12:14:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:52.251 12:14:57 -- common/autotest_common.sh@884 -- # size=4096 00:30:52.251 12:14:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:52.251 12:14:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:52.251 12:14:57 -- common/autotest_common.sh@887 -- # return 0 00:30:52.251 12:14:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:52.251 12:14:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:30:52.251 12:14:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:52.509 12:14:57 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:52.509 { 00:30:52.509 "nbd_device": "/dev/nbd0", 00:30:52.509 "bdev_name": "Nvme0n1" 00:30:52.509 } 00:30:52.509 ]' 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:52.509 { 00:30:52.509 "nbd_device": "/dev/nbd0", 00:30:52.509 "bdev_name": "Nvme0n1" 00:30:52.509 } 00:30:52.509 ]' 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@51 -- # local i 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.509 12:14:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@41 -- # break 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:52.767 12:14:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:53.024 12:14:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@65 -- # true 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@65 -- # count=0 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@122 -- # count=0 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@127 -- # return 0 00:30:53.025 12:14:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@12 -- # local i 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:53.025 12:14:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:30:53.283 /dev/nbd0 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:53.283 12:14:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:53.283 12:14:58 -- common/autotest_common.sh@867 -- # local i 00:30:53.283 12:14:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:53.283 12:14:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:53.283 12:14:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:53.283 12:14:58 -- common/autotest_common.sh@871 -- # break 00:30:53.283 12:14:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:53.283 12:14:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:53.283 12:14:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:53.283 1+0 records in 00:30:53.283 1+0 records out 00:30:53.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481707 s, 8.5 MB/s 00:30:53.283 12:14:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:53.283 12:14:58 -- common/autotest_common.sh@884 -- # size=4096 00:30:53.283 12:14:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:53.283 12:14:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:53.283 12:14:58 -- common/autotest_common.sh@887 -- # return 0 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:53.283 12:14:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:53.541 12:14:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:53.541 { 00:30:53.541 "nbd_device": "/dev/nbd0", 00:30:53.541 "bdev_name": "Nvme0n1" 00:30:53.541 } 00:30:53.541 ]' 00:30:53.541 12:14:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:53.541 12:14:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:53.541 { 00:30:53.541 "nbd_device": "/dev/nbd0", 00:30:53.541 "bdev_name": "Nvme0n1" 00:30:53.541 } 00:30:53.541 ]' 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@65 -- # count=1 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@66 -- # echo 1 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@95 -- # count=1 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:30:53.541 12:14:59 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:53.541 256+0 records in 00:30:53.541 256+0 records out 00:30:53.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669767 s, 157 MB/s 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:53.541 12:14:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:53.799 256+0 records in 00:30:53.799 256+0 records out 00:30:53.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0741232 s, 14.1 MB/s 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@51 -- # local i 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:53.799 12:14:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@41 -- # break 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@45 -- # return 0 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:54.057 12:14:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:54.314 12:14:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:54.315 
12:14:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@65 -- # true 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@65 -- # count=0 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@104 -- # count=0 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@109 -- # return 0 00:30:54.315 12:14:59 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:54.315 12:14:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:54.573 malloc_lvol_verify 00:30:54.573 12:14:59 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:54.831 6940c7dd-32f3-43bf-b826-a17e65bdf679 00:30:54.831 12:15:00 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:55.089 61b3e194-19c1-476b-bd4c-217673088b1c 00:30:55.089 12:15:00 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:55.347 /dev/nbd0 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:55.347 mke2fs 1.46.5 (30-Dec-2021) 00:30:55.347 00:30:55.347 Filesystem too small for a journal 00:30:55.347 Discarding device blocks: 0/1024 done 00:30:55.347 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:55.347 00:30:55.347 Allocating group tables: 0/1 done 00:30:55.347 Writing inode tables: 0/1 done 00:30:55.347 Writing superblocks and filesystem accounting information: 0/1 done 00:30:55.347 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@51 -- # local i 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:55.347 12:15:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@41 -- # break 00:30:55.606 12:15:00 -- 
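The nbd data check earlier in this test boils down to exporting the bdev as a kernel block device, writing a random pattern through it, and comparing it back; condensed from the commands traced by the nbd_common.sh helpers:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256       # 1 MiB test pattern
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                        # read back and compare
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0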
bdev/nbd_common.sh@45 -- # return 0 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:55.606 12:15:00 -- bdev/nbd_common.sh@147 -- # return 0 00:30:55.606 12:15:00 -- bdev/blockdev.sh@324 -- # killprocess 147378 00:30:55.606 12:15:00 -- common/autotest_common.sh@936 -- # '[' -z 147378 ']' 00:30:55.606 12:15:00 -- common/autotest_common.sh@940 -- # kill -0 147378 00:30:55.606 12:15:00 -- common/autotest_common.sh@941 -- # uname 00:30:55.606 12:15:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:55.606 12:15:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147378 00:30:55.606 12:15:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:55.606 killing process with pid 147378 00:30:55.606 12:15:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:55.606 12:15:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147378' 00:30:55.606 12:15:00 -- common/autotest_common.sh@955 -- # kill 147378 00:30:55.606 12:15:00 -- common/autotest_common.sh@960 -- # wait 147378 00:30:55.864 12:15:01 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:55.864 00:30:55.864 real 0m5.039s 00:30:55.864 user 0m7.680s 00:30:55.864 sys 0m1.232s 00:30:55.864 12:15:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:55.864 ************************************ 00:30:55.864 END TEST bdev_nbd 00:30:55.864 12:15:01 -- common/autotest_common.sh@10 -- # set +x 00:30:55.864 ************************************ 00:30:55.864 12:15:01 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:55.864 12:15:01 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:30:55.864 skipping fio tests on NVMe due to multi-ns failures. 00:30:55.864 12:15:01 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:55.864 12:15:01 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:55.864 12:15:01 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:55.864 12:15:01 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:30:55.864 12:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:55.864 12:15:01 -- common/autotest_common.sh@10 -- # set +x 00:30:55.864 ************************************ 00:30:55.864 START TEST bdev_verify 00:30:55.864 ************************************ 00:30:55.864 12:15:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:55.864 [2024-11-29 12:15:01.331668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:55.864 [2024-11-29 12:15:01.331942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147567 ] 00:30:56.122 [2024-11-29 12:15:01.487282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:56.122 [2024-11-29 12:15:01.585859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.122 [2024-11-29 12:15:01.585871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.381 Running I/O for 5 seconds... 
00:31:01.671 00:31:01.671 Latency(us) 00:31:01.671 [2024-11-29T12:15:07.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.671 [2024-11-29T12:15:07.182Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:01.671 Verification LBA range: start 0x0 length 0xa0000 00:31:01.671 Nvme0n1 : 5.01 18579.40 72.58 0.00 0.00 6857.31 325.82 15966.95 00:31:01.671 [2024-11-29T12:15:07.182Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:01.671 Verification LBA range: start 0xa0000 length 0xa0000 00:31:01.671 Nvme0n1 : 5.01 18595.69 72.64 0.00 0.00 6851.81 443.11 17396.83 00:31:01.671 [2024-11-29T12:15:07.182Z] =================================================================================================================== 00:31:01.671 [2024-11-29T12:15:07.182Z] Total : 37175.09 145.22 0.00 0.00 6854.55 325.82 17396.83 00:31:11.649 00:31:11.649 real 0m14.814s 00:31:11.649 user 0m28.766s 00:31:11.649 sys 0m0.313s 00:31:11.649 12:15:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:11.649 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.649 ************************************ 00:31:11.649 END TEST bdev_verify 00:31:11.649 ************************************ 00:31:11.649 12:15:16 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:11.649 12:15:16 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:11.649 12:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:11.649 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.649 ************************************ 00:31:11.649 START TEST bdev_verify_big_io 00:31:11.649 ************************************ 00:31:11.649 12:15:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:11.649 [2024-11-29 12:15:16.201171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:11.649 [2024-11-29 12:15:16.201379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147754 ] 00:31:11.649 [2024-11-29 12:15:16.351332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:11.649 [2024-11-29 12:15:16.441177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.649 [2024-11-29 12:15:16.441184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.649 Running I/O for 5 seconds... 
00:31:16.915 00:31:16.915 Latency(us) 00:31:16.915 [2024-11-29T12:15:22.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.915 [2024-11-29T12:15:22.426Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:16.915 Verification LBA range: start 0x0 length 0xa000 00:31:16.915 Nvme0n1 : 5.04 1800.76 112.55 0.00 0.00 70062.89 474.76 103427.72 00:31:16.915 [2024-11-29T12:15:22.426Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:16.915 Verification LBA range: start 0xa000 length 0xa000 00:31:16.915 Nvme0n1 : 5.04 1782.17 111.39 0.00 0.00 70797.83 476.63 121062.87 00:31:16.915 [2024-11-29T12:15:22.426Z] =================================================================================================================== 00:31:16.915 [2024-11-29T12:15:22.426Z] Total : 3582.92 223.93 0.00 0.00 70428.65 474.76 121062.87 00:31:16.915 00:31:16.915 real 0m6.181s 00:31:16.915 user 0m11.583s 00:31:16.915 sys 0m0.239s 00:31:16.915 ************************************ 00:31:16.915 END TEST bdev_verify_big_io 00:31:16.915 ************************************ 00:31:16.915 12:15:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:16.915 12:15:22 -- common/autotest_common.sh@10 -- # set +x 00:31:16.915 12:15:22 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:16.915 12:15:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:16.915 12:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:16.915 12:15:22 -- common/autotest_common.sh@10 -- # set +x 00:31:16.915 ************************************ 00:31:16.915 START TEST bdev_write_zeroes 00:31:16.915 ************************************ 00:31:16.915 12:15:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:16.915 [2024-11-29 12:15:22.422283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:16.915 [2024-11-29 12:15:22.422538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147849 ] 00:31:17.174 [2024-11-29 12:15:22.565289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.174 [2024-11-29 12:15:22.652710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.432 Running I/O for 1 seconds... 
00:31:18.365 00:31:18.365 Latency(us) 00:31:18.365 [2024-11-29T12:15:23.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.365 [2024-11-29T12:15:23.876Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:18.365 Nvme0n1 : 1.00 55607.55 217.22 0.00 0.00 2296.05 707.49 10545.34 00:31:18.365 [2024-11-29T12:15:23.876Z] =================================================================================================================== 00:31:18.365 [2024-11-29T12:15:23.876Z] Total : 55607.55 217.22 0.00 0.00 2296.05 707.49 10545.34 00:31:18.932 00:31:18.932 real 0m1.764s 00:31:18.932 user 0m1.472s 00:31:18.932 sys 0m0.192s 00:31:18.932 12:15:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:18.932 ************************************ 00:31:18.932 END TEST bdev_write_zeroes 00:31:18.932 12:15:24 -- common/autotest_common.sh@10 -- # set +x 00:31:18.932 ************************************ 00:31:18.932 12:15:24 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:18.932 12:15:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:18.932 12:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:18.932 12:15:24 -- common/autotest_common.sh@10 -- # set +x 00:31:18.932 ************************************ 00:31:18.932 START TEST bdev_json_nonenclosed 00:31:18.932 ************************************ 00:31:18.932 12:15:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:18.932 [2024-11-29 12:15:24.247897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:18.932 [2024-11-29 12:15:24.248175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147894 ] 00:31:18.932 [2024-11-29 12:15:24.393030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.190 [2024-11-29 12:15:24.486134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.190 [2024-11-29 12:15:24.486394] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:31:19.190 [2024-11-29 12:15:24.486453] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:19.190 00:31:19.190 real 0m0.430s 00:31:19.190 user 0m0.228s 00:31:19.190 sys 0m0.103s 00:31:19.190 12:15:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:19.190 12:15:24 -- common/autotest_common.sh@10 -- # set +x 00:31:19.190 ************************************ 00:31:19.190 END TEST bdev_json_nonenclosed 00:31:19.190 ************************************ 00:31:19.190 12:15:24 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:19.190 12:15:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:19.190 12:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:19.190 12:15:24 -- common/autotest_common.sh@10 -- # set +x 00:31:19.190 ************************************ 00:31:19.190 START TEST bdev_json_nonarray 00:31:19.190 ************************************ 00:31:19.190 12:15:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:19.448 [2024-11-29 12:15:24.720249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:19.448 [2024-11-29 12:15:24.720507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147924 ] 00:31:19.448 [2024-11-29 12:15:24.868145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.448 [2024-11-29 12:15:24.960094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.448 [2024-11-29 12:15:24.960569] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:31:19.448 [2024-11-29 12:15:24.960768] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:19.707 00:31:19.707 real 0m0.419s 00:31:19.707 user 0m0.226s 00:31:19.707 sys 0m0.093s 00:31:19.707 12:15:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:19.707 ************************************ 00:31:19.707 END TEST bdev_json_nonarray 00:31:19.707 ************************************ 00:31:19.707 12:15:25 -- common/autotest_common.sh@10 -- # set +x 00:31:19.707 12:15:25 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:31:19.707 12:15:25 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:31:19.707 12:15:25 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:31:19.707 12:15:25 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:31:19.707 12:15:25 -- bdev/blockdev.sh@809 -- # cleanup 00:31:19.707 12:15:25 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:19.707 12:15:25 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:19.707 12:15:25 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:31:19.707 12:15:25 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:31:19.707 12:15:25 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:31:19.707 12:15:25 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:31:19.707 00:31:19.707 real 0m33.203s 00:31:19.707 user 0m56.448s 00:31:19.707 sys 0m3.284s 00:31:19.707 12:15:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:19.707 12:15:25 -- common/autotest_common.sh@10 -- # set +x 00:31:19.707 ************************************ 00:31:19.707 END TEST blockdev_nvme 00:31:19.707 ************************************ 00:31:19.707 12:15:25 -- spdk/autotest.sh@206 -- # uname -s 00:31:19.707 12:15:25 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:31:19.707 12:15:25 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:19.707 12:15:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:19.707 12:15:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:19.707 12:15:25 -- common/autotest_common.sh@10 -- # set +x 00:31:19.707 ************************************ 00:31:19.707 START TEST blockdev_nvme_gpt 00:31:19.707 ************************************ 00:31:19.707 12:15:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:19.964 * Looking for test storage... 
00:31:19.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:19.964 12:15:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:19.964 12:15:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:19.964 12:15:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:19.964 12:15:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:19.964 12:15:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:19.964 12:15:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:19.964 12:15:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:19.964 12:15:25 -- scripts/common.sh@335 -- # IFS=.-: 00:31:19.964 12:15:25 -- scripts/common.sh@335 -- # read -ra ver1 00:31:19.964 12:15:25 -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.964 12:15:25 -- scripts/common.sh@336 -- # read -ra ver2 00:31:19.964 12:15:25 -- scripts/common.sh@337 -- # local 'op=<' 00:31:19.964 12:15:25 -- scripts/common.sh@339 -- # ver1_l=2 00:31:19.964 12:15:25 -- scripts/common.sh@340 -- # ver2_l=1 00:31:19.964 12:15:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:19.964 12:15:25 -- scripts/common.sh@343 -- # case "$op" in 00:31:19.964 12:15:25 -- scripts/common.sh@344 -- # : 1 00:31:19.964 12:15:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:19.964 12:15:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:19.964 12:15:25 -- scripts/common.sh@364 -- # decimal 1 00:31:19.964 12:15:25 -- scripts/common.sh@352 -- # local d=1 00:31:19.964 12:15:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.964 12:15:25 -- scripts/common.sh@354 -- # echo 1 00:31:19.964 12:15:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:19.964 12:15:25 -- scripts/common.sh@365 -- # decimal 2 00:31:19.964 12:15:25 -- scripts/common.sh@352 -- # local d=2 00:31:19.964 12:15:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.964 12:15:25 -- scripts/common.sh@354 -- # echo 2 00:31:19.964 12:15:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:19.964 12:15:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:19.964 12:15:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:19.964 12:15:25 -- scripts/common.sh@367 -- # return 0 00:31:19.964 12:15:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.964 12:15:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:19.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.964 --rc genhtml_branch_coverage=1 00:31:19.964 --rc genhtml_function_coverage=1 00:31:19.964 --rc genhtml_legend=1 00:31:19.964 --rc geninfo_all_blocks=1 00:31:19.964 --rc geninfo_unexecuted_blocks=1 00:31:19.964 00:31:19.964 ' 00:31:19.964 12:15:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:19.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.964 --rc genhtml_branch_coverage=1 00:31:19.964 --rc genhtml_function_coverage=1 00:31:19.964 --rc genhtml_legend=1 00:31:19.964 --rc geninfo_all_blocks=1 00:31:19.964 --rc geninfo_unexecuted_blocks=1 00:31:19.964 00:31:19.964 ' 00:31:19.964 12:15:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:19.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.964 --rc genhtml_branch_coverage=1 00:31:19.964 --rc genhtml_function_coverage=1 00:31:19.964 --rc genhtml_legend=1 00:31:19.964 --rc geninfo_all_blocks=1 00:31:19.964 --rc geninfo_unexecuted_blocks=1 00:31:19.964 00:31:19.964 ' 00:31:19.964 12:15:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:19.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.964 --rc genhtml_branch_coverage=1 00:31:19.964 --rc genhtml_function_coverage=1 00:31:19.964 --rc genhtml_legend=1 00:31:19.964 --rc geninfo_all_blocks=1 00:31:19.964 --rc geninfo_unexecuted_blocks=1 00:31:19.964 00:31:19.964 ' 00:31:19.964 12:15:25 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:19.964 12:15:25 -- bdev/nbd_common.sh@6 -- # set -e 00:31:19.964 12:15:25 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:19.964 12:15:25 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:19.964 12:15:25 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:19.964 12:15:25 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:19.964 12:15:25 -- bdev/blockdev.sh@18 -- # : 00:31:19.964 12:15:25 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:19.964 12:15:25 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:19.964 12:15:25 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:19.964 12:15:25 -- bdev/blockdev.sh@672 -- # uname -s 00:31:19.964 12:15:25 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:19.964 12:15:25 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:19.964 12:15:25 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:31:19.964 12:15:25 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:19.964 12:15:25 -- bdev/blockdev.sh@682 -- # dek= 00:31:19.964 12:15:25 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:19.964 12:15:25 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:19.964 12:15:25 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:19.964 12:15:25 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:31:19.964 12:15:25 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:31:19.964 12:15:25 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:19.964 12:15:25 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=148009 00:31:19.964 12:15:25 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:19.964 12:15:25 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:19.964 12:15:25 -- bdev/blockdev.sh@47 -- # waitforlisten 148009 00:31:19.964 12:15:25 -- common/autotest_common.sh@829 -- # '[' -z 148009 ']' 00:31:19.964 12:15:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.964 12:15:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:19.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.964 12:15:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.964 12:15:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:19.964 12:15:25 -- common/autotest_common.sh@10 -- # set +x 00:31:19.964 [2024-11-29 12:15:25.445639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:31:19.964 [2024-11-29 12:15:25.446440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148009 ] 00:31:20.220 [2024-11-29 12:15:25.595040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.220 [2024-11-29 12:15:25.705860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:20.220 [2024-11-29 12:15:25.706816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.154 12:15:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.154 12:15:26 -- common/autotest_common.sh@862 -- # return 0 00:31:21.154 12:15:26 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:21.154 12:15:26 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:31:21.154 12:15:26 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:21.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:21.413 Waiting for block devices as requested 00:31:21.413 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:21.413 12:15:26 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:31:21.413 12:15:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:31:21.413 12:15:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:31:21.413 12:15:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:31:21.413 12:15:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:31:21.413 12:15:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:31:21.413 12:15:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:31:21.413 12:15:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:21.413 12:15:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:31:21.413 12:15:26 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:31:21.413 12:15:26 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:31:21.413 12:15:26 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:31:21.413 12:15:26 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:31:21.413 12:15:26 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:31:21.413 12:15:26 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:31:21.413 12:15:26 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:31:21.413 12:15:26 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:31:21.413 BYT; 00:31:21.413 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:31:21.413 12:15:26 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:31:21.413 BYT; 00:31:21.413 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:31:21.413 12:15:26 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:31:21.413 12:15:26 -- bdev/blockdev.sh@114 -- # break 00:31:21.413 12:15:26 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:31:21.413 12:15:26 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:31:21.413 12:15:26 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:21.413 12:15:26 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:31:21.981 12:15:27 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:31:21.981 12:15:27 -- scripts/common.sh@410 -- # local spdk_guid 00:31:21.981 12:15:27 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:21.981 12:15:27 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:21.981 12:15:27 -- scripts/common.sh@415 -- # IFS='()' 00:31:21.981 12:15:27 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:31:21.981 12:15:27 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:21.981 12:15:27 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:31:21.981 12:15:27 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:21.982 12:15:27 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:21.982 12:15:27 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:21.982 12:15:27 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:31:21.982 12:15:27 -- scripts/common.sh@422 -- # local spdk_guid 00:31:21.982 12:15:27 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:21.982 12:15:27 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:21.982 12:15:27 -- scripts/common.sh@427 -- # IFS='()' 00:31:21.982 12:15:27 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:31:21.982 12:15:27 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:21.982 12:15:27 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:31:21.982 12:15:27 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:21.982 12:15:27 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:21.982 12:15:27 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:21.982 12:15:27 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:31:22.931 The operation has completed successfully. 00:31:22.931 12:15:28 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:31:23.864 The operation has completed successfully. 
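The parted/sgdisk sequence above is the whole GPT preparation the harness performs before the gpt bdev tests: two partitions are created, then tagged with the SPDK partition type GUIDs (the current GUID on partition 1, the legacy GUID on partition 2). A minimal standalone sketch of the same sequence, assuming a scratch NVMe device at /dev/nvme0n1 that holds no data (the device path is illustrative, not part of the test harness):

# create a GPT label with two equal partitions, as the harness does above
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# tag partition 1 with the current SPDK partition type GUID and a unique partition GUID
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
# tag partition 2 with the old SPDK partition type GUID
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1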
00:31:23.864 12:15:29 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:24.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:24.382 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:25.316 12:15:30 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:31:25.316 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.316 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.316 [] 00:31:25.316 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.316 12:15:30 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:31:25.316 12:15:30 -- bdev/blockdev.sh@79 -- # local json 00:31:25.316 12:15:30 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:31:25.316 12:15:30 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:25.316 12:15:30 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:31:25.316 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.316 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.316 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.316 12:15:30 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:25.316 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.316 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.316 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.316 12:15:30 -- bdev/blockdev.sh@738 -- # cat 00:31:25.316 12:15:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:25.316 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.316 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.316 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.316 12:15:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:25.316 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.316 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.575 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.575 12:15:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:25.575 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.575 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.575 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.575 12:15:30 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:25.575 12:15:30 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:25.575 12:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.575 12:15:30 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:25.575 12:15:30 -- common/autotest_common.sh@10 -- # set +x 00:31:25.575 12:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.575 12:15:30 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:25.575 12:15:30 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:25.575 12:15:30 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:31:25.575 12:15:30 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:25.575 12:15:30 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:31:25.575 12:15:30 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:25.575 12:15:30 -- bdev/blockdev.sh@752 -- # killprocess 148009 00:31:25.575 12:15:30 -- common/autotest_common.sh@936 -- # '[' -z 148009 ']' 00:31:25.575 12:15:30 -- common/autotest_common.sh@940 -- # kill -0 148009 00:31:25.575 12:15:30 -- common/autotest_common.sh@941 -- # uname 00:31:25.575 12:15:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:25.575 12:15:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148009 00:31:25.575 12:15:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:25.575 12:15:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:25.575 12:15:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148009' 00:31:25.575 killing process with pid 148009 00:31:25.575 12:15:30 -- common/autotest_common.sh@955 -- # kill 148009 00:31:25.575 12:15:30 -- common/autotest_common.sh@960 -- # wait 148009 00:31:26.143 12:15:31 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:26.143 12:15:31 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:31:26.143 12:15:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:31:26.143 12:15:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:26.143 12:15:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.143 ************************************ 00:31:26.143 START TEST bdev_hello_world 00:31:26.143 ************************************ 00:31:26.143 12:15:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:31:26.143 [2024-11-29 12:15:31.538362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:26.143 [2024-11-29 12:15:31.538649] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148427 ] 00:31:26.402 [2024-11-29 12:15:31.686934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.402 [2024-11-29 12:15:31.785904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.660 [2024-11-29 12:15:32.006099] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:26.660 [2024-11-29 12:15:32.006492] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:31:26.660 [2024-11-29 12:15:32.006695] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:26.660 [2024-11-29 12:15:32.009759] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:26.660 [2024-11-29 12:15:32.010561] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:26.660 [2024-11-29 12:15:32.010738] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:26.660 [2024-11-29 12:15:32.011077] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:26.660 00:31:26.660 [2024-11-29 12:15:32.011282] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:26.919 00:31:26.919 real 0m0.820s 00:31:26.919 user 0m0.518s 00:31:26.919 sys 0m0.201s 00:31:26.919 12:15:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:26.919 ************************************ 00:31:26.919 END TEST bdev_hello_world 00:31:26.919 ************************************ 00:31:26.919 12:15:32 -- common/autotest_common.sh@10 -- # set +x 00:31:26.919 12:15:32 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:26.919 12:15:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:26.919 12:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:26.919 12:15:32 -- common/autotest_common.sh@10 -- # set +x 00:31:26.919 ************************************ 00:31:26.919 START TEST bdev_bounds 00:31:26.919 ************************************ 00:31:26.920 12:15:32 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:31:26.920 12:15:32 -- bdev/blockdev.sh@288 -- # bdevio_pid=148458 00:31:26.920 12:15:32 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:26.920 12:15:32 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:26.920 Process bdevio pid: 148458 00:31:26.920 12:15:32 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 148458' 00:31:26.920 12:15:32 -- bdev/blockdev.sh@291 -- # waitforlisten 148458 00:31:26.920 12:15:32 -- common/autotest_common.sh@829 -- # '[' -z 148458 ']' 00:31:26.920 12:15:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.920 12:15:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.920 12:15:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:26.920 12:15:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.920 12:15:32 -- common/autotest_common.sh@10 -- # set +x 00:31:26.920 [2024-11-29 12:15:32.413996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:26.920 [2024-11-29 12:15:32.414239] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148458 ] 00:31:27.179 [2024-11-29 12:15:32.576335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:27.179 [2024-11-29 12:15:32.683737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.179 [2024-11-29 12:15:32.683863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.179 [2024-11-29 12:15:32.683849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.115 12:15:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.115 12:15:33 -- common/autotest_common.sh@862 -- # return 0 00:31:28.115 12:15:33 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:28.115 I/O targets: 00:31:28.115 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:31:28.115 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:31:28.115 00:31:28.115 00:31:28.115 CUnit - A unit testing framework for C - Version 2.1-3 00:31:28.115 http://cunit.sourceforge.net/ 00:31:28.115 00:31:28.115 00:31:28.115 Suite: bdevio tests on: Nvme0n1p2 00:31:28.115 Test: blockdev write read block ...passed 00:31:28.115 Test: blockdev write zeroes read block ...passed 00:31:28.115 Test: blockdev write zeroes read no split ...passed 00:31:28.115 Test: blockdev write zeroes read split ...passed 00:31:28.115 Test: blockdev write zeroes read split partial ...passed 00:31:28.115 Test: blockdev reset ...[2024-11-29 12:15:33.546549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:28.115 [2024-11-29 12:15:33.549277] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:28.115 passed 00:31:28.115 Test: blockdev write read 8 blocks ...passed 00:31:28.115 Test: blockdev write read size > 128k ...passed 00:31:28.115 Test: blockdev write read invalid size ...passed 00:31:28.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:28.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:28.115 Test: blockdev write read max offset ...passed 00:31:28.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:28.115 Test: blockdev writev readv 8 blocks ...passed 00:31:28.115 Test: blockdev writev readv 30 x 1block ...passed 00:31:28.115 Test: blockdev writev readv block ...passed 00:31:28.115 Test: blockdev writev readv size > 128k ...passed 00:31:28.115 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:28.115 Test: blockdev comparev and writev ...[2024-11-29 12:15:33.555800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x87a0b000 len:0x1000 00:31:28.115 [2024-11-29 12:15:33.555891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:28.115 passed 00:31:28.115 Test: blockdev nvme passthru rw ...passed 00:31:28.115 Test: blockdev nvme passthru vendor specific ...passed 00:31:28.115 Test: blockdev nvme admin passthru ...passed 00:31:28.115 Test: blockdev copy ...passed 00:31:28.115 Suite: bdevio tests on: Nvme0n1p1 00:31:28.115 Test: blockdev write read block ...passed 00:31:28.115 Test: blockdev write zeroes read block ...passed 00:31:28.115 Test: blockdev write zeroes read no split ...passed 00:31:28.115 Test: blockdev write zeroes read split ...passed 00:31:28.115 Test: blockdev write zeroes read split partial ...passed 00:31:28.115 Test: blockdev reset ...[2024-11-29 12:15:33.571020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:28.115 [2024-11-29 12:15:33.573510] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:28.115 passed 00:31:28.115 Test: blockdev write read 8 blocks ...passed 00:31:28.115 Test: blockdev write read size > 128k ...passed 00:31:28.115 Test: blockdev write read invalid size ...passed 00:31:28.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:28.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:28.115 Test: blockdev write read max offset ...passed 00:31:28.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:28.115 Test: blockdev writev readv 8 blocks ...passed 00:31:28.116 Test: blockdev writev readv 30 x 1block ...passed 00:31:28.116 Test: blockdev writev readv block ...passed 00:31:28.116 Test: blockdev writev readv size > 128k ...passed 00:31:28.116 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:28.116 Test: blockdev comparev and writev ...[2024-11-29 12:15:33.580025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x87a0d000 len:0x1000 00:31:28.116 [2024-11-29 12:15:33.580099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:28.116 passed 00:31:28.116 Test: blockdev nvme passthru rw ...passed 00:31:28.116 Test: blockdev nvme passthru vendor specific ...passed 00:31:28.116 Test: blockdev nvme admin passthru ...passed 00:31:28.116 Test: blockdev copy ...passed 00:31:28.116 00:31:28.116 Run Summary: Type Total Ran Passed Failed Inactive 00:31:28.116 suites 2 2 n/a 0 0 00:31:28.116 tests 46 46 46 0 0 00:31:28.116 asserts 284 284 284 0 n/a 00:31:28.116 00:31:28.116 Elapsed time = 0.114 seconds 00:31:28.116 0 00:31:28.116 12:15:33 -- bdev/blockdev.sh@293 -- # killprocess 148458 00:31:28.116 12:15:33 -- common/autotest_common.sh@936 -- # '[' -z 148458 ']' 00:31:28.116 12:15:33 -- common/autotest_common.sh@940 -- # kill -0 148458 00:31:28.116 12:15:33 -- common/autotest_common.sh@941 -- # uname 00:31:28.116 12:15:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:28.116 12:15:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148458 00:31:28.116 12:15:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:28.116 killing process with pid 148458 00:31:28.116 12:15:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:28.116 12:15:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148458' 00:31:28.116 12:15:33 -- common/autotest_common.sh@955 -- # kill 148458 00:31:28.116 12:15:33 -- common/autotest_common.sh@960 -- # wait 148458 00:31:28.374 12:15:33 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:28.374 00:31:28.374 real 0m1.490s 00:31:28.374 user 0m3.761s 00:31:28.374 sys 0m0.367s 00:31:28.374 ************************************ 00:31:28.374 END TEST bdev_bounds 00:31:28.374 ************************************ 00:31:28.374 12:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:28.374 12:15:33 -- common/autotest_common.sh@10 -- # set +x 00:31:28.633 12:15:33 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:31:28.633 12:15:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:28.633 12:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:28.633 12:15:33 -- common/autotest_common.sh@10 -- # set +x 00:31:28.633 ************************************ 00:31:28.633 START TEST bdev_nbd 
00:31:28.633 ************************************ 00:31:28.633 12:15:33 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:31:28.633 12:15:33 -- bdev/blockdev.sh@298 -- # uname -s 00:31:28.633 12:15:33 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:28.633 12:15:33 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:28.633 12:15:33 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:28.633 12:15:33 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:31:28.633 12:15:33 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:28.633 12:15:33 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:31:28.633 12:15:33 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:28.633 12:15:33 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:28.633 12:15:33 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:28.633 12:15:33 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:31:28.633 12:15:33 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:28.633 12:15:33 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:28.633 12:15:33 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:28.633 12:15:33 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:28.633 12:15:33 -- bdev/blockdev.sh@316 -- # nbd_pid=148514 00:31:28.633 12:15:33 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:28.633 12:15:33 -- bdev/blockdev.sh@318 -- # waitforlisten 148514 /var/tmp/spdk-nbd.sock 00:31:28.633 12:15:33 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:28.633 12:15:33 -- common/autotest_common.sh@829 -- # '[' -z 148514 ']' 00:31:28.633 12:15:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:28.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:28.633 12:15:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.633 12:15:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:28.633 12:15:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.633 12:15:33 -- common/autotest_common.sh@10 -- # set +x 00:31:28.633 [2024-11-29 12:15:33.972771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:31:28.633 [2024-11-29 12:15:33.973070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.633 [2024-11-29 12:15:34.121575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.891 [2024-11-29 12:15:34.220531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.457 12:15:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:29.457 12:15:34 -- common/autotest_common.sh@862 -- # return 0 00:31:29.457 12:15:34 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@24 -- # local i 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:29.457 12:15:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:31:30.069 12:15:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:30.069 12:15:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:30.069 12:15:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:30.069 12:15:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:30.069 12:15:35 -- common/autotest_common.sh@867 -- # local i 00:31:30.069 12:15:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:30.069 12:15:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:30.069 12:15:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:30.069 12:15:35 -- common/autotest_common.sh@871 -- # break 00:31:30.069 12:15:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:30.069 12:15:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:30.069 12:15:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:30.069 1+0 records in 00:31:30.069 1+0 records out 00:31:30.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821647 s, 5.0 MB/s 00:31:30.069 12:15:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.069 12:15:35 -- common/autotest_common.sh@884 -- # size=4096 00:31:30.069 12:15:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.069 12:15:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:30.069 12:15:35 -- common/autotest_common.sh@887 -- # return 0 00:31:30.069 12:15:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:30.069 12:15:35 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:30.069 12:15:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:31:30.327 12:15:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:30.327 12:15:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:30.327 12:15:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:30.327 12:15:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:30.327 12:15:35 -- common/autotest_common.sh@867 -- # local i 00:31:30.327 12:15:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:30.327 12:15:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:30.327 12:15:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:30.327 12:15:35 -- common/autotest_common.sh@871 -- # break 00:31:30.327 12:15:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:30.327 12:15:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:30.327 12:15:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:30.327 1+0 records in 00:31:30.327 1+0 records out 00:31:30.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00533809 s, 767 kB/s 00:31:30.327 12:15:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.327 12:15:35 -- common/autotest_common.sh@884 -- # size=4096 00:31:30.327 12:15:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.327 12:15:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:30.327 12:15:35 -- common/autotest_common.sh@887 -- # return 0 00:31:30.327 12:15:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:30.327 12:15:35 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:30.327 12:15:35 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:30.585 { 00:31:30.585 "nbd_device": "/dev/nbd0", 00:31:30.585 "bdev_name": "Nvme0n1p1" 00:31:30.585 }, 00:31:30.585 { 00:31:30.585 "nbd_device": "/dev/nbd1", 00:31:30.585 "bdev_name": "Nvme0n1p2" 00:31:30.585 } 00:31:30.585 ]' 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:30.585 { 00:31:30.585 "nbd_device": "/dev/nbd0", 00:31:30.585 "bdev_name": "Nvme0n1p1" 00:31:30.585 }, 00:31:30.585 { 00:31:30.585 "nbd_device": "/dev/nbd1", 00:31:30.585 "bdev_name": "Nvme0n1p2" 00:31:30.585 } 00:31:30.585 ]' 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@51 -- # local i 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:30.585 12:15:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:30.844 12:15:36 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@41 -- # break 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@45 -- # return 0 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:30.844 12:15:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@41 -- # break 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@45 -- # return 0 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.102 12:15:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@65 -- # true 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@65 -- # count=0 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@122 -- # count=0 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@127 -- # return 0 00:31:31.360 12:15:36 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@12 -- # local i 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:31.360 12:15:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:31:31.618 /dev/nbd0 00:31:31.618 12:15:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:31.618 12:15:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:31.618 12:15:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:31.618 12:15:37 -- common/autotest_common.sh@867 -- # local i 00:31:31.618 12:15:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:31.618 12:15:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:31.618 12:15:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:31.618 12:15:37 -- common/autotest_common.sh@871 -- # break 00:31:31.618 12:15:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:31.618 12:15:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:31.618 12:15:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:31.618 1+0 records in 00:31:31.618 1+0 records out 00:31:31.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483027 s, 8.5 MB/s 00:31:31.618 12:15:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.618 12:15:37 -- common/autotest_common.sh@884 -- # size=4096 00:31:31.618 12:15:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.618 12:15:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:31.618 12:15:37 -- common/autotest_common.sh@887 -- # return 0 00:31:31.618 12:15:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:31.618 12:15:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:31.618 12:15:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:31:31.877 /dev/nbd1 00:31:31.877 12:15:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:31.877 12:15:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:31.877 12:15:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:31.877 12:15:37 -- common/autotest_common.sh@867 -- # local i 00:31:31.877 12:15:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:31.877 12:15:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:31.877 12:15:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:31.877 12:15:37 -- common/autotest_common.sh@871 -- # break 00:31:31.877 12:15:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:31.877 12:15:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:31.877 12:15:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:31.877 1+0 records in 00:31:31.877 1+0 records out 00:31:31.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837958 s, 4.9 MB/s 00:31:31.877 12:15:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.877 12:15:37 -- common/autotest_common.sh@884 -- # size=4096 00:31:31.877 12:15:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.877 12:15:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:31.877 12:15:37 -- common/autotest_common.sh@887 -- # return 0 00:31:31.877 12:15:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:31.877 12:15:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:31.877 12:15:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
00:31:31.877 12:15:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.877 12:15:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:32.445 { 00:31:32.445 "nbd_device": "/dev/nbd0", 00:31:32.445 "bdev_name": "Nvme0n1p1" 00:31:32.445 }, 00:31:32.445 { 00:31:32.445 "nbd_device": "/dev/nbd1", 00:31:32.445 "bdev_name": "Nvme0n1p2" 00:31:32.445 } 00:31:32.445 ]' 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:32.445 { 00:31:32.445 "nbd_device": "/dev/nbd0", 00:31:32.445 "bdev_name": "Nvme0n1p1" 00:31:32.445 }, 00:31:32.445 { 00:31:32.445 "nbd_device": "/dev/nbd1", 00:31:32.445 "bdev_name": "Nvme0n1p2" 00:31:32.445 } 00:31:32.445 ]' 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:32.445 /dev/nbd1' 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:32.445 /dev/nbd1' 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@65 -- # count=2 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@95 -- # count=2 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:32.445 12:15:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:32.446 256+0 records in 00:31:32.446 256+0 records out 00:31:32.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0082985 s, 126 MB/s 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:32.446 256+0 records in 00:31:32.446 256+0 records out 00:31:32.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0821637 s, 12.8 MB/s 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:32.446 256+0 records in 00:31:32.446 256+0 records out 00:31:32.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0819305 s, 12.8 MB/s 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:31:32.446 12:15:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@51 -- # local i 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:32.446 12:15:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:32.705 12:15:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@41 -- # break 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@45 -- # return 0 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:32.963 12:15:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@41 -- # break 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@45 -- # return 0 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.222 12:15:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@65 -- # true 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@65 -- # count=0 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@104 -- # count=0 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:33.481 12:15:38 -- 
bdev/nbd_common.sh@109 -- # return 0 00:31:33.481 12:15:38 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:33.481 12:15:38 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:33.738 malloc_lvol_verify 00:31:33.738 12:15:39 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:33.997 3b8fe651-5de0-4da9-9163-7b5f7c7043b8 00:31:33.997 12:15:39 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:34.330 648bd783-b786-4644-b1ed-19fe4b7c2d68 00:31:34.330 12:15:39 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:34.589 /dev/nbd0 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:34.589 mke2fs 1.46.5 (30-Dec-2021) 00:31:34.589 00:31:34.589 Filesystem too small for a journal 00:31:34.589 Discarding device blocks: 0/1024 done 00:31:34.589 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:34.589 00:31:34.589 Allocating group tables: 0/1 done 00:31:34.589 Writing inode tables: 0/1 done 00:31:34.589 Writing superblocks and filesystem accounting information: 0/1 done 00:31:34.589 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@51 -- # local i 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:34.589 12:15:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@41 -- # break 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@45 -- # return 0 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:34.848 12:15:40 -- bdev/nbd_common.sh@147 -- # return 0 00:31:34.848 12:15:40 -- bdev/blockdev.sh@324 -- # killprocess 148514 00:31:34.848 12:15:40 -- common/autotest_common.sh@936 -- # '[' -z 148514 ']' 00:31:34.848 12:15:40 -- common/autotest_common.sh@940 -- # kill -0 148514 00:31:34.848 12:15:40 -- common/autotest_common.sh@941 -- # uname 00:31:34.848 12:15:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:34.848 12:15:40 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148514 00:31:34.848 12:15:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:34.848 12:15:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:34.848 killing process with pid 148514 00:31:34.848 12:15:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148514' 00:31:34.848 12:15:40 -- common/autotest_common.sh@955 -- # kill 148514 00:31:34.848 12:15:40 -- common/autotest_common.sh@960 -- # wait 148514 00:31:35.107 12:15:40 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:35.107 00:31:35.107 real 0m6.611s 00:31:35.107 user 0m10.026s 00:31:35.107 sys 0m1.815s 00:31:35.107 ************************************ 00:31:35.107 END TEST bdev_nbd 00:31:35.107 ************************************ 00:31:35.107 12:15:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:35.107 12:15:40 -- common/autotest_common.sh@10 -- # set +x 00:31:35.107 12:15:40 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:35.107 12:15:40 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:31:35.107 12:15:40 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:31:35.107 skipping fio tests on NVMe due to multi-ns failures. 00:31:35.107 12:15:40 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:31:35.108 12:15:40 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:35.108 12:15:40 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:35.108 12:15:40 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:35.108 12:15:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:35.108 12:15:40 -- common/autotest_common.sh@10 -- # set +x 00:31:35.108 ************************************ 00:31:35.108 START TEST bdev_verify 00:31:35.108 ************************************ 00:31:35.108 12:15:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:35.366 [2024-11-29 12:15:40.630808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:35.366 [2024-11-29 12:15:40.631025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148766 ] 00:31:35.366 [2024-11-29 12:15:40.779337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:35.366 [2024-11-29 12:15:40.878456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.366 [2024-11-29 12:15:40.878464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.624 Running I/O for 5 seconds... 
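Before the bdevperf run that has just started produces its numbers, it is worth summarizing the nbd_with_lvol_verify sequence that closed out the NBD test above: a malloc bdev is created over RPC, an lvstore and a logical volume are layered on it, the volume is exported as /dev/nbd0, and mkfs.ext4 proves the device is usable end to end. A sketch of that RPC sequence, assuming bash; socket path, names and sizes are taken from the trace, error handling is omitted:

# lvol-over-NBD verification as traced earlier (illustrative condensation).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # malloc bdev, sizes as passed in the trace
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in that store
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # filesystem creation proves I/O works
$RPC nbd_stop_disk /dev/nbd0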
00:31:40.888 00:31:40.888 Latency(us) 00:31:40.888 [2024-11-29T12:15:46.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.888 [2024-11-29T12:15:46.399Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:40.888 Verification LBA range: start 0x0 length 0x4ff80 00:31:40.888 Nvme0n1p1 : 5.02 7862.33 30.71 0.00 0.00 16232.62 3440.64 18469.24 00:31:40.888 [2024-11-29T12:15:46.399Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:40.888 Verification LBA range: start 0x4ff80 length 0x4ff80 00:31:40.888 Nvme0n1p1 : 5.01 7859.27 30.70 0.00 0.00 16242.62 1921.40 20614.05 00:31:40.888 [2024-11-29T12:15:46.399Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:40.888 Verification LBA range: start 0x0 length 0x4ff7f 00:31:40.888 Nvme0n1p2 : 5.02 7858.15 30.70 0.00 0.00 16225.97 3798.11 16562.73 00:31:40.888 [2024-11-29T12:15:46.399Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:40.888 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:31:40.888 Nvme0n1p2 : 5.02 7863.25 30.72 0.00 0.00 16214.45 800.58 16443.58 00:31:40.888 [2024-11-29T12:15:46.399Z] =================================================================================================================== 00:31:40.888 [2024-11-29T12:15:46.399Z] Total : 31443.00 122.82 0.00 0.00 16228.91 800.58 20614.05 00:31:45.073 00:31:45.073 real 0m9.220s 00:31:45.073 user 0m17.609s 00:31:45.073 sys 0m0.272s 00:31:45.073 12:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:45.073 12:15:49 -- common/autotest_common.sh@10 -- # set +x 00:31:45.073 ************************************ 00:31:45.073 END TEST bdev_verify 00:31:45.073 ************************************ 00:31:45.073 12:15:49 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:45.073 12:15:49 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:45.073 12:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:45.073 12:15:49 -- common/autotest_common.sh@10 -- # set +x 00:31:45.073 ************************************ 00:31:45.073 START TEST bdev_verify_big_io 00:31:45.073 ************************************ 00:31:45.073 12:15:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:45.073 [2024-11-29 12:15:49.899482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:45.073 [2024-11-29 12:15:49.899672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148862 ] 00:31:45.073 [2024-11-29 12:15:50.044903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:45.073 [2024-11-29 12:15:50.138276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.073 [2024-11-29 12:15:50.138270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.073 Running I/O for 5 seconds... 
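The verify results above and the big-I/O run that has just been launched come from the same bdevperf binary; only the I/O size, workload and runtime change between the three runs in this section. The invocations, copied from the run_test lines in the trace (the trailing empty argument is dropped here):

# bdevperf invocations used by bdev_verify, bdev_verify_big_io and bdev_write_zeroes.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$BDEVPERF --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
$BDEVPERF --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
$BDEVPERF --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1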
00:31:50.344 00:31:50.344 Latency(us) 00:31:50.344 [2024-11-29T12:15:55.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.344 [2024-11-29T12:15:55.855Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:50.344 Verification LBA range: start 0x0 length 0x4ff8 00:31:50.344 Nvme0n1p1 : 5.11 867.91 54.24 0.00 0.00 145811.76 2636.33 224967.21 00:31:50.344 [2024-11-29T12:15:55.855Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:50.344 Verification LBA range: start 0x4ff8 length 0x4ff8 00:31:50.344 Nvme0n1p1 : 5.11 826.45 51.65 0.00 0.00 152879.14 2800.17 232593.22 00:31:50.344 [2024-11-29T12:15:55.855Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:50.344 Verification LBA range: start 0x0 length 0x4ff7 00:31:50.344 Nvme0n1p2 : 5.12 867.23 54.20 0.00 0.00 143850.08 5093.93 165865.66 00:31:50.344 [2024-11-29T12:15:55.855Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:50.344 Verification LBA range: start 0x4ff7 length 0x4ff7 00:31:50.344 Nvme0n1p2 : 5.12 833.52 52.09 0.00 0.00 149531.46 2398.02 170631.91 00:31:50.344 [2024-11-29T12:15:55.855Z] =================================================================================================================== 00:31:50.344 [2024-11-29T12:15:55.855Z] Total : 3395.10 212.19 0.00 0.00 147943.15 2398.02 232593.22 00:31:50.910 00:31:50.910 real 0m6.264s 00:31:50.910 user 0m11.790s 00:31:50.910 sys 0m0.223s 00:31:50.910 12:15:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:50.910 ************************************ 00:31:50.910 END TEST bdev_verify_big_io 00:31:50.910 ************************************ 00:31:50.910 12:15:56 -- common/autotest_common.sh@10 -- # set +x 00:31:50.910 12:15:56 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:50.910 12:15:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:50.910 12:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:50.910 12:15:56 -- common/autotest_common.sh@10 -- # set +x 00:31:50.910 ************************************ 00:31:50.910 START TEST bdev_write_zeroes 00:31:50.910 ************************************ 00:31:50.910 12:15:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:50.910 [2024-11-29 12:15:56.221744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:50.910 [2024-11-29 12:15:56.222019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148956 ] 00:31:50.910 [2024-11-29 12:15:56.365033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.168 [2024-11-29 12:15:56.446502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.168 Running I/O for 1 seconds... 
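In these result tables the MiB/s column is simply IOPS multiplied by the I/O size: the big-I/O verify rows use 64 KiB I/Os, the other runs use 4096-byte I/Os. A quick check of two rows from the tables above:

# MiB/s = IOPS * io_size_bytes / 1048576
echo 'scale=2; 867.91 * 65536 / 1048576' | bc    # 54.24  (Nvme0n1p1, 64 KiB verify table)
echo 'scale=2; 7862.33 * 4096 / 1048576' | bc    # 30.71  (Nvme0n1p1, 4 KiB verify table)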
00:31:52.541 00:31:52.541 Latency(us) 00:31:52.541 [2024-11-29T12:15:58.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.541 [2024-11-29T12:15:58.053Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:52.542 Nvme0n1p1 : 1.00 24521.32 95.79 0.00 0.00 5209.23 2323.55 15490.33 00:31:52.542 [2024-11-29T12:15:58.053Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:52.542 Nvme0n1p2 : 1.01 24499.12 95.70 0.00 0.00 5205.15 2398.02 15728.64 00:31:52.542 [2024-11-29T12:15:58.053Z] =================================================================================================================== 00:31:52.542 [2024-11-29T12:15:58.053Z] Total : 49020.45 191.49 0.00 0.00 5207.19 2323.55 15728.64 00:31:52.542 00:31:52.542 real 0m1.786s 00:31:52.542 user 0m1.498s 00:31:52.542 sys 0m0.189s 00:31:52.542 ************************************ 00:31:52.542 END TEST bdev_write_zeroes 00:31:52.542 ************************************ 00:31:52.542 12:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:52.542 12:15:57 -- common/autotest_common.sh@10 -- # set +x 00:31:52.542 12:15:57 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:52.542 12:15:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:52.542 12:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.542 12:15:57 -- common/autotest_common.sh@10 -- # set +x 00:31:52.542 ************************************ 00:31:52.542 START TEST bdev_json_nonenclosed 00:31:52.542 ************************************ 00:31:52.542 12:15:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:52.801 [2024-11-29 12:15:58.056928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:52.801 [2024-11-29 12:15:58.057211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149003 ] 00:31:52.801 [2024-11-29 12:15:58.205536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.801 [2024-11-29 12:15:58.286148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.801 [2024-11-29 12:15:58.286450] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:31:52.801 [2024-11-29 12:15:58.286496] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:53.061 00:31:53.061 real 0m0.418s 00:31:53.061 user 0m0.218s 00:31:53.061 sys 0m0.101s 00:31:53.061 ************************************ 00:31:53.061 END TEST bdev_json_nonenclosed 00:31:53.061 ************************************ 00:31:53.061 12:15:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:53.061 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:31:53.061 12:15:58 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:53.061 12:15:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:53.061 12:15:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:53.061 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:31:53.061 ************************************ 00:31:53.061 START TEST bdev_json_nonarray 00:31:53.061 ************************************ 00:31:53.061 12:15:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:53.061 [2024-11-29 12:15:58.524423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:53.061 [2024-11-29 12:15:58.524669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149025 ] 00:31:53.330 [2024-11-29 12:15:58.674912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.330 [2024-11-29 12:15:58.766388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.330 [2024-11-29 12:15:58.766609] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
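Each of these sub-tests is driven through a run_test-style wrapper, which is where the asterisk banners and the real/user/sys timing lines in this trace come from. The SPDK helper itself is not reproduced here; a minimal illustrative wrapper with the same observable behaviour might look like the sketch below, where run_test_sketch is a hypothetical name:

# Hypothetical run_test-style wrapper: banner, time the command, banner, keep the exit code.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}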
00:31:53.330 [2024-11-29 12:15:58.766653] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:53.605 00:31:53.605 real 0m0.415s 00:31:53.605 user 0m0.225s 00:31:53.605 sys 0m0.090s 00:31:53.605 12:15:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:53.605 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:31:53.605 ************************************ 00:31:53.605 END TEST bdev_json_nonarray 00:31:53.605 ************************************ 00:31:53.605 12:15:58 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:31:53.605 12:15:58 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:31:53.605 12:15:58 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:31:53.605 12:15:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:53.605 12:15:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:53.605 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:31:53.605 ************************************ 00:31:53.605 START TEST bdev_gpt_uuid 00:31:53.605 ************************************ 00:31:53.605 12:15:58 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:31:53.605 12:15:58 -- bdev/blockdev.sh@612 -- # local bdev 00:31:53.605 12:15:58 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:31:53.605 12:15:58 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=149063 00:31:53.605 12:15:58 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:53.605 12:15:58 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:53.605 12:15:58 -- bdev/blockdev.sh@47 -- # waitforlisten 149063 00:31:53.605 12:15:58 -- common/autotest_common.sh@829 -- # '[' -z 149063 ']' 00:31:53.605 12:15:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.605 12:15:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:53.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.605 12:15:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.605 12:15:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:53.605 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:31:53.605 [2024-11-29 12:15:59.007311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:53.605 [2024-11-29 12:15:59.007580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149063 ] 00:31:53.863 [2024-11-29 12:15:59.154031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.863 [2024-11-29 12:15:59.243875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:53.863 [2024-11-29 12:15:59.244199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.799 12:15:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:54.799 12:15:59 -- common/autotest_common.sh@862 -- # return 0 00:31:54.799 12:15:59 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:54.799 12:15:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.800 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:31:54.800 Some configs were skipped because the RPC state that can call them passed over. 
00:31:54.800 12:16:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:31:54.800 12:16:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.800 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:31:54.800 12:16:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:31:54.800 12:16:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.800 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:31:54.800 12:16:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@619 -- # bdev='[ 00:31:54.800 { 00:31:54.800 "name": "Nvme0n1p1", 00:31:54.800 "aliases": [ 00:31:54.800 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:31:54.800 ], 00:31:54.800 "product_name": "GPT Disk", 00:31:54.800 "block_size": 4096, 00:31:54.800 "num_blocks": 655104, 00:31:54.800 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:31:54.800 "assigned_rate_limits": { 00:31:54.800 "rw_ios_per_sec": 0, 00:31:54.800 "rw_mbytes_per_sec": 0, 00:31:54.800 "r_mbytes_per_sec": 0, 00:31:54.800 "w_mbytes_per_sec": 0 00:31:54.800 }, 00:31:54.800 "claimed": false, 00:31:54.800 "zoned": false, 00:31:54.800 "supported_io_types": { 00:31:54.800 "read": true, 00:31:54.800 "write": true, 00:31:54.800 "unmap": true, 00:31:54.800 "write_zeroes": true, 00:31:54.800 "flush": true, 00:31:54.800 "reset": true, 00:31:54.800 "compare": true, 00:31:54.800 "compare_and_write": false, 00:31:54.800 "abort": true, 00:31:54.800 "nvme_admin": false, 00:31:54.800 "nvme_io": false 00:31:54.800 }, 00:31:54.800 "driver_specific": { 00:31:54.800 "gpt": { 00:31:54.800 "base_bdev": "Nvme0n1", 00:31:54.800 "offset_blocks": 256, 00:31:54.800 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:31:54.800 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:31:54.800 "partition_name": "SPDK_TEST_first" 00:31:54.800 } 00:31:54.800 } 00:31:54.800 } 00:31:54.800 ]' 00:31:54.800 12:16:00 -- bdev/blockdev.sh@620 -- # jq -r length 00:31:54.800 12:16:00 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:31:54.800 12:16:00 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:54.800 12:16:00 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:54.800 12:16:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.800 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:31:54.800 12:16:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.800 12:16:00 -- bdev/blockdev.sh@624 -- # bdev='[ 00:31:54.800 { 00:31:54.800 "name": "Nvme0n1p2", 00:31:54.800 "aliases": [ 00:31:54.800 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:31:54.800 ], 00:31:54.800 "product_name": "GPT Disk", 00:31:54.800 "block_size": 4096, 00:31:54.800 "num_blocks": 655103, 00:31:54.800 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:54.800 "assigned_rate_limits": { 00:31:54.800 "rw_ios_per_sec": 0, 00:31:54.800 
"rw_mbytes_per_sec": 0, 00:31:54.800 "r_mbytes_per_sec": 0, 00:31:54.800 "w_mbytes_per_sec": 0 00:31:54.800 }, 00:31:54.800 "claimed": false, 00:31:54.800 "zoned": false, 00:31:54.800 "supported_io_types": { 00:31:54.800 "read": true, 00:31:54.800 "write": true, 00:31:54.800 "unmap": true, 00:31:54.800 "write_zeroes": true, 00:31:54.800 "flush": true, 00:31:54.800 "reset": true, 00:31:54.800 "compare": true, 00:31:54.800 "compare_and_write": false, 00:31:54.800 "abort": true, 00:31:54.800 "nvme_admin": false, 00:31:54.800 "nvme_io": false 00:31:54.800 }, 00:31:54.800 "driver_specific": { 00:31:54.800 "gpt": { 00:31:54.800 "base_bdev": "Nvme0n1", 00:31:54.800 "offset_blocks": 655360, 00:31:54.800 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:31:54.800 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:54.800 "partition_name": "SPDK_TEST_second" 00:31:54.800 } 00:31:54.800 } 00:31:54.800 } 00:31:54.800 ]' 00:31:54.800 12:16:00 -- bdev/blockdev.sh@625 -- # jq -r length 00:31:55.059 12:16:00 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:31:55.059 12:16:00 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:31:55.059 12:16:00 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:55.059 12:16:00 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:55.059 12:16:00 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:55.059 12:16:00 -- bdev/blockdev.sh@629 -- # killprocess 149063 00:31:55.059 12:16:00 -- common/autotest_common.sh@936 -- # '[' -z 149063 ']' 00:31:55.059 12:16:00 -- common/autotest_common.sh@940 -- # kill -0 149063 00:31:55.059 12:16:00 -- common/autotest_common.sh@941 -- # uname 00:31:55.059 12:16:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:55.059 12:16:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149063 00:31:55.059 12:16:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:55.059 12:16:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:55.059 killing process with pid 149063 00:31:55.059 12:16:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149063' 00:31:55.059 12:16:00 -- common/autotest_common.sh@955 -- # kill 149063 00:31:55.059 12:16:00 -- common/autotest_common.sh@960 -- # wait 149063 00:31:55.625 00:31:55.625 real 0m1.952s 00:31:55.625 user 0m2.223s 00:31:55.625 sys 0m0.465s 00:31:55.625 12:16:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:55.625 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:31:55.625 ************************************ 00:31:55.625 END TEST bdev_gpt_uuid 00:31:55.625 ************************************ 00:31:55.625 12:16:00 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:31:55.625 12:16:00 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:31:55.625 12:16:00 -- bdev/blockdev.sh@809 -- # cleanup 00:31:55.625 12:16:00 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:55.625 12:16:00 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:55.625 12:16:00 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:31:55.625 12:16:00 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:31:55.625 12:16:00 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:31:55.625 12:16:00 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:55.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:55.883 Waiting for block devices as requested 00:31:55.883 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:55.883 12:16:01 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:31:55.883 12:16:01 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:31:56.142 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:31:56.142 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:31:56.142 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:31:56.142 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:31:56.142 12:16:01 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:31:56.142 00:31:56.142 real 0m36.229s 00:31:56.142 user 0m55.159s 00:31:56.142 sys 0m6.224s 00:31:56.142 12:16:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:56.142 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:31:56.142 ************************************ 00:31:56.142 END TEST blockdev_nvme_gpt 00:31:56.142 ************************************ 00:31:56.142 12:16:01 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:56.142 12:16:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:56.142 12:16:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:56.142 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:31:56.142 ************************************ 00:31:56.142 START TEST nvme 00:31:56.142 ************************************ 00:31:56.142 12:16:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:56.142 * Looking for test storage... 00:31:56.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:56.142 12:16:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:56.142 12:16:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:56.142 12:16:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:56.142 12:16:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:56.142 12:16:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:56.142 12:16:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:56.142 12:16:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:56.142 12:16:01 -- scripts/common.sh@335 -- # IFS=.-: 00:31:56.142 12:16:01 -- scripts/common.sh@335 -- # read -ra ver1 00:31:56.142 12:16:01 -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.142 12:16:01 -- scripts/common.sh@336 -- # read -ra ver2 00:31:56.142 12:16:01 -- scripts/common.sh@337 -- # local 'op=<' 00:31:56.142 12:16:01 -- scripts/common.sh@339 -- # ver1_l=2 00:31:56.142 12:16:01 -- scripts/common.sh@340 -- # ver2_l=1 00:31:56.142 12:16:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:56.142 12:16:01 -- scripts/common.sh@343 -- # case "$op" in 00:31:56.142 12:16:01 -- scripts/common.sh@344 -- # : 1 00:31:56.142 12:16:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:56.142 12:16:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:56.142 12:16:01 -- scripts/common.sh@364 -- # decimal 1 00:31:56.142 12:16:01 -- scripts/common.sh@352 -- # local d=1 00:31:56.142 12:16:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.142 12:16:01 -- scripts/common.sh@354 -- # echo 1 00:31:56.142 12:16:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:56.142 12:16:01 -- scripts/common.sh@365 -- # decimal 2 00:31:56.142 12:16:01 -- scripts/common.sh@352 -- # local d=2 00:31:56.142 12:16:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.142 12:16:01 -- scripts/common.sh@354 -- # echo 2 00:31:56.142 12:16:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:56.142 12:16:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:56.142 12:16:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:56.142 12:16:01 -- scripts/common.sh@367 -- # return 0 00:31:56.142 12:16:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.142 12:16:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.142 --rc genhtml_branch_coverage=1 00:31:56.142 --rc genhtml_function_coverage=1 00:31:56.142 --rc genhtml_legend=1 00:31:56.142 --rc geninfo_all_blocks=1 00:31:56.142 --rc geninfo_unexecuted_blocks=1 00:31:56.142 00:31:56.142 ' 00:31:56.142 12:16:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.142 --rc genhtml_branch_coverage=1 00:31:56.142 --rc genhtml_function_coverage=1 00:31:56.142 --rc genhtml_legend=1 00:31:56.142 --rc geninfo_all_blocks=1 00:31:56.142 --rc geninfo_unexecuted_blocks=1 00:31:56.142 00:31:56.142 ' 00:31:56.142 12:16:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.142 --rc genhtml_branch_coverage=1 00:31:56.142 --rc genhtml_function_coverage=1 00:31:56.142 --rc genhtml_legend=1 00:31:56.142 --rc geninfo_all_blocks=1 00:31:56.142 --rc geninfo_unexecuted_blocks=1 00:31:56.142 00:31:56.142 ' 00:31:56.142 12:16:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.142 --rc genhtml_branch_coverage=1 00:31:56.142 --rc genhtml_function_coverage=1 00:31:56.142 --rc genhtml_legend=1 00:31:56.142 --rc geninfo_all_blocks=1 00:31:56.142 --rc geninfo_unexecuted_blocks=1 00:31:56.142 00:31:56.142 ' 00:31:56.142 12:16:01 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:56.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:56.709 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:57.645 12:16:03 -- nvme/nvme.sh@79 -- # uname 00:31:57.645 12:16:03 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:31:57.645 12:16:03 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:31:57.645 12:16:03 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:31:57.645 12:16:03 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:31:57.645 12:16:03 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:31:57.645 12:16:03 -- common/autotest_common.sh@1055 -- # echo 0 00:31:57.645 12:16:03 -- common/autotest_common.sh@1057 -- # stubpid=149462 00:31:57.645 Waiting for stub to ready for secondary processes... 
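The scripts/common.sh trace earlier in this chunk (lt 1.15 2 driving cmp_versions) compares dotted version strings field by field, so the lcov version gate needs no external tools; the stub startup whose wait loop continues below is unrelated to it. A simplified sketch of that comparison idea for purely numeric components, assuming bash; version_lt is a hypothetical name, not the SPDK helper:

# Return success if dotted version $1 is strictly less than $2 (numeric fields only).
version_lt() {
  local -a a b; local i x y
  IFS=.- read -ra a <<< "$1"
  IFS=.- read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                                   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"         # matches the lt 1.15 2 check in the trace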
00:31:57.645 12:16:03 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:31:57.645 12:16:03 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:31:57.645 12:16:03 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:57.645 12:16:03 -- common/autotest_common.sh@1061 -- # [[ -e /proc/149462 ]] 00:31:57.645 12:16:03 -- common/autotest_common.sh@1062 -- # sleep 1s 00:31:57.904 [2024-11-29 12:16:03.185660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:31:57.904 [2024-11-29 12:16:03.185916] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.840 12:16:04 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:58.840 12:16:04 -- common/autotest_common.sh@1061 -- # [[ -e /proc/149462 ]] 00:31:58.840 12:16:04 -- common/autotest_common.sh@1062 -- # sleep 1s 00:31:59.098 [2024-11-29 12:16:04.445484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.098 [2024-11-29 12:16:04.515494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.098 [2024-11-29 12:16:04.515655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.098 [2024-11-29 12:16:04.515645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.098 [2024-11-29 12:16:04.525737] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:31:59.098 [2024-11-29 12:16:04.536943] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:31:59.098 [2024-11-29 12:16:04.537984] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:31:59.666 12:16:05 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:59.666 done. 00:31:59.666 12:16:05 -- common/autotest_common.sh@1064 -- # echo done. 
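The "Waiting for stub to ready for secondary processes..." loop above simply polls for /var/run/spdk_stub0, the path the trace tests with '[ -e ... ]', and gives up if the stub PID vanishes from /proc before that file appears. A condensed sketch of that loop, assuming bash; the PID is the one from the trace and the failure message is illustrative:

# Wait for the stub primary process to become ready for secondary processes.
stubpid=149462
while [ ! -e /var/run/spdk_stub0 ]; do
  [ -e /proc/$stubpid ] || { echo "stub exited before becoming ready"; exit 1; }
  sleep 1s
done
echo done.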
00:31:59.666 12:16:05 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:31:59.666 12:16:05 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:31:59.666 12:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:59.666 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:31:59.666 ************************************ 00:31:59.666 START TEST nvme_reset 00:31:59.666 ************************************ 00:31:59.666 12:16:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:31:59.924 Initializing NVMe Controllers 00:31:59.924 Skipping QEMU NVMe SSD at 0000:00:06.0 00:31:59.924 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:31:59.924 00:31:59.924 real 0m0.250s 00:31:59.924 user 0m0.099s 00:31:59.924 sys 0m0.095s 00:31:59.924 12:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:59.924 ************************************ 00:31:59.924 END TEST nvme_reset 00:31:59.924 ************************************ 00:31:59.924 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:32:00.182 12:16:05 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:32:00.182 12:16:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:00.182 12:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:00.182 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:32:00.182 ************************************ 00:32:00.182 START TEST nvme_identify 00:32:00.182 ************************************ 00:32:00.182 12:16:05 -- common/autotest_common.sh@1114 -- # nvme_identify 00:32:00.182 12:16:05 -- nvme/nvme.sh@12 -- # bdfs=() 00:32:00.182 12:16:05 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:32:00.182 12:16:05 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:32:00.182 12:16:05 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:32:00.182 12:16:05 -- common/autotest_common.sh@1508 -- # bdfs=() 00:32:00.182 12:16:05 -- common/autotest_common.sh@1508 -- # local bdfs 00:32:00.182 12:16:05 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:00.182 12:16:05 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:00.182 12:16:05 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:32:00.182 12:16:05 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:32:00.182 12:16:05 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:32:00.182 12:16:05 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:32:00.441 [2024-11-29 12:16:05.755680] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 149501 terminated unexpected 00:32:00.441 ===================================================== 00:32:00.441 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:00.441 ===================================================== 00:32:00.441 Controller Capabilities/Features 00:32:00.441 ================================ 00:32:00.441 Vendor ID: 1b36 00:32:00.441 Subsystem Vendor ID: 1af4 00:32:00.441 Serial Number: 12340 00:32:00.441 Model Number: QEMU NVMe Ctrl 00:32:00.441 Firmware Version: 8.0.0 00:32:00.441 Recommended Arb Burst: 6 00:32:00.441 IEEE OUI Identifier: 00 54 52 00:32:00.441 Multi-path I/O 00:32:00.441 May have multiple subsystem ports: No 00:32:00.441 May have multiple controllers: No 00:32:00.441 
Associated with SR-IOV VF: No 00:32:00.441 Max Data Transfer Size: 524288 00:32:00.441 Max Number of Namespaces: 256 00:32:00.441 Max Number of I/O Queues: 64 00:32:00.441 NVMe Specification Version (VS): 1.4 00:32:00.441 NVMe Specification Version (Identify): 1.4 00:32:00.441 Maximum Queue Entries: 2048 00:32:00.441 Contiguous Queues Required: Yes 00:32:00.441 Arbitration Mechanisms Supported 00:32:00.441 Weighted Round Robin: Not Supported 00:32:00.441 Vendor Specific: Not Supported 00:32:00.441 Reset Timeout: 7500 ms 00:32:00.441 Doorbell Stride: 4 bytes 00:32:00.441 NVM Subsystem Reset: Not Supported 00:32:00.441 Command Sets Supported 00:32:00.441 NVM Command Set: Supported 00:32:00.441 Boot Partition: Not Supported 00:32:00.441 Memory Page Size Minimum: 4096 bytes 00:32:00.441 Memory Page Size Maximum: 65536 bytes 00:32:00.441 Persistent Memory Region: Not Supported 00:32:00.441 Optional Asynchronous Events Supported 00:32:00.441 Namespace Attribute Notices: Supported 00:32:00.441 Firmware Activation Notices: Not Supported 00:32:00.441 ANA Change Notices: Not Supported 00:32:00.441 PLE Aggregate Log Change Notices: Not Supported 00:32:00.441 LBA Status Info Alert Notices: Not Supported 00:32:00.441 EGE Aggregate Log Change Notices: Not Supported 00:32:00.441 Normal NVM Subsystem Shutdown event: Not Supported 00:32:00.441 Zone Descriptor Change Notices: Not Supported 00:32:00.441 Discovery Log Change Notices: Not Supported 00:32:00.441 Controller Attributes 00:32:00.441 128-bit Host Identifier: Not Supported 00:32:00.441 Non-Operational Permissive Mode: Not Supported 00:32:00.441 NVM Sets: Not Supported 00:32:00.441 Read Recovery Levels: Not Supported 00:32:00.441 Endurance Groups: Not Supported 00:32:00.441 Predictable Latency Mode: Not Supported 00:32:00.441 Traffic Based Keep ALive: Not Supported 00:32:00.441 Namespace Granularity: Not Supported 00:32:00.441 SQ Associations: Not Supported 00:32:00.441 UUID List: Not Supported 00:32:00.441 Multi-Domain Subsystem: Not Supported 00:32:00.441 Fixed Capacity Management: Not Supported 00:32:00.441 Variable Capacity Management: Not Supported 00:32:00.441 Delete Endurance Group: Not Supported 00:32:00.441 Delete NVM Set: Not Supported 00:32:00.441 Extended LBA Formats Supported: Supported 00:32:00.441 Flexible Data Placement Supported: Not Supported 00:32:00.441 00:32:00.441 Controller Memory Buffer Support 00:32:00.441 ================================ 00:32:00.441 Supported: No 00:32:00.441 00:32:00.441 Persistent Memory Region Support 00:32:00.441 ================================ 00:32:00.441 Supported: No 00:32:00.441 00:32:00.441 Admin Command Set Attributes 00:32:00.441 ============================ 00:32:00.441 Security Send/Receive: Not Supported 00:32:00.441 Format NVM: Supported 00:32:00.441 Firmware Activate/Download: Not Supported 00:32:00.441 Namespace Management: Supported 00:32:00.441 Device Self-Test: Not Supported 00:32:00.441 Directives: Supported 00:32:00.441 NVMe-MI: Not Supported 00:32:00.441 Virtualization Management: Not Supported 00:32:00.441 Doorbell Buffer Config: Supported 00:32:00.441 Get LBA Status Capability: Not Supported 00:32:00.441 Command & Feature Lockdown Capability: Not Supported 00:32:00.441 Abort Command Limit: 4 00:32:00.441 Async Event Request Limit: 4 00:32:00.441 Number of Firmware Slots: N/A 00:32:00.441 Firmware Slot 1 Read-Only: N/A 00:32:00.441 Firmware Activation Without Reset: N/A 00:32:00.441 Multiple Update Detection Support: N/A 00:32:00.441 Firmware Update Granularity: No Information 
Provided 00:32:00.441 Per-Namespace SMART Log: Yes 00:32:00.441 Asymmetric Namespace Access Log Page: Not Supported 00:32:00.441 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:00.441 Command Effects Log Page: Supported 00:32:00.441 Get Log Page Extended Data: Supported 00:32:00.441 Telemetry Log Pages: Not Supported 00:32:00.441 Persistent Event Log Pages: Not Supported 00:32:00.441 Supported Log Pages Log Page: May Support 00:32:00.441 Commands Supported & Effects Log Page: Not Supported 00:32:00.441 Feature Identifiers & Effects Log Page:May Support 00:32:00.441 NVMe-MI Commands & Effects Log Page: May Support 00:32:00.441 Data Area 4 for Telemetry Log: Not Supported 00:32:00.441 Error Log Page Entries Supported: 1 00:32:00.441 Keep Alive: Not Supported 00:32:00.441 00:32:00.441 NVM Command Set Attributes 00:32:00.441 ========================== 00:32:00.441 Submission Queue Entry Size 00:32:00.441 Max: 64 00:32:00.441 Min: 64 00:32:00.441 Completion Queue Entry Size 00:32:00.441 Max: 16 00:32:00.441 Min: 16 00:32:00.441 Number of Namespaces: 256 00:32:00.441 Compare Command: Supported 00:32:00.441 Write Uncorrectable Command: Not Supported 00:32:00.441 Dataset Management Command: Supported 00:32:00.441 Write Zeroes Command: Supported 00:32:00.441 Set Features Save Field: Supported 00:32:00.441 Reservations: Not Supported 00:32:00.441 Timestamp: Supported 00:32:00.441 Copy: Supported 00:32:00.441 Volatile Write Cache: Present 00:32:00.441 Atomic Write Unit (Normal): 1 00:32:00.441 Atomic Write Unit (PFail): 1 00:32:00.441 Atomic Compare & Write Unit: 1 00:32:00.441 Fused Compare & Write: Not Supported 00:32:00.441 Scatter-Gather List 00:32:00.441 SGL Command Set: Supported 00:32:00.442 SGL Keyed: Not Supported 00:32:00.442 SGL Bit Bucket Descriptor: Not Supported 00:32:00.442 SGL Metadata Pointer: Not Supported 00:32:00.442 Oversized SGL: Not Supported 00:32:00.442 SGL Metadata Address: Not Supported 00:32:00.442 SGL Offset: Not Supported 00:32:00.442 Transport SGL Data Block: Not Supported 00:32:00.442 Replay Protected Memory Block: Not Supported 00:32:00.442 00:32:00.442 Firmware Slot Information 00:32:00.442 ========================= 00:32:00.442 Active slot: 1 00:32:00.442 Slot 1 Firmware Revision: 1.0 00:32:00.442 00:32:00.442 00:32:00.442 Commands Supported and Effects 00:32:00.442 ============================== 00:32:00.442 Admin Commands 00:32:00.442 -------------- 00:32:00.442 Delete I/O Submission Queue (00h): Supported 00:32:00.442 Create I/O Submission Queue (01h): Supported 00:32:00.442 Get Log Page (02h): Supported 00:32:00.442 Delete I/O Completion Queue (04h): Supported 00:32:00.442 Create I/O Completion Queue (05h): Supported 00:32:00.442 Identify (06h): Supported 00:32:00.442 Abort (08h): Supported 00:32:00.442 Set Features (09h): Supported 00:32:00.442 Get Features (0Ah): Supported 00:32:00.442 Asynchronous Event Request (0Ch): Supported 00:32:00.442 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:00.442 Directive Send (19h): Supported 00:32:00.442 Directive Receive (1Ah): Supported 00:32:00.442 Virtualization Management (1Ch): Supported 00:32:00.442 Doorbell Buffer Config (7Ch): Supported 00:32:00.442 Format NVM (80h): Supported LBA-Change 00:32:00.442 I/O Commands 00:32:00.442 ------------ 00:32:00.442 Flush (00h): Supported LBA-Change 00:32:00.442 Write (01h): Supported LBA-Change 00:32:00.442 Read (02h): Supported 00:32:00.442 Compare (05h): Supported 00:32:00.442 Write Zeroes (08h): Supported LBA-Change 00:32:00.442 Dataset Management (09h): 
Supported LBA-Change 00:32:00.442 Unknown (0Ch): Supported 00:32:00.442 Unknown (12h): Supported 00:32:00.442 Copy (19h): Supported LBA-Change 00:32:00.442 Unknown (1Dh): Supported LBA-Change 00:32:00.442 00:32:00.442 Error Log 00:32:00.442 ========= 00:32:00.442 00:32:00.442 Arbitration 00:32:00.442 =========== 00:32:00.442 Arbitration Burst: no limit 00:32:00.442 00:32:00.442 Power Management 00:32:00.442 ================ 00:32:00.442 Number of Power States: 1 00:32:00.442 Current Power State: Power State #0 00:32:00.442 Power State #0: 00:32:00.442 Max Power: 25.00 W 00:32:00.442 Non-Operational State: Operational 00:32:00.442 Entry Latency: 16 microseconds 00:32:00.442 Exit Latency: 4 microseconds 00:32:00.442 Relative Read Throughput: 0 00:32:00.442 Relative Read Latency: 0 00:32:00.442 Relative Write Throughput: 0 00:32:00.442 Relative Write Latency: 0 00:32:00.442 Idle Power: Not Reported 00:32:00.442 Active Power: Not Reported 00:32:00.442 Non-Operational Permissive Mode: Not Supported 00:32:00.442 00:32:00.442 Health Information 00:32:00.442 ================== 00:32:00.442 Critical Warnings: 00:32:00.442 Available Spare Space: OK 00:32:00.442 Temperature: OK 00:32:00.442 Device Reliability: OK 00:32:00.442 Read Only: No 00:32:00.442 Volatile Memory Backup: OK 00:32:00.442 Current Temperature: 323 Kelvin (50 Celsius) 00:32:00.442 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:00.442 Available Spare: 0% 00:32:00.442 Available Spare Threshold: 0% 00:32:00.442 Life Percentage Used: 0% 00:32:00.442 Data Units Read: 7779 00:32:00.442 Data Units Written: 3792 00:32:00.442 Host Read Commands: 383305 00:32:00.442 Host Write Commands: 206877 00:32:00.442 Controller Busy Time: 0 minutes 00:32:00.442 Power Cycles: 0 00:32:00.442 Power On Hours: 0 hours 00:32:00.442 Unsafe Shutdowns: 0 00:32:00.442 Unrecoverable Media Errors: 0 00:32:00.442 Lifetime Error Log Entries: 0 00:32:00.442 Warning Temperature Time: 0 minutes 00:32:00.442 Critical Temperature Time: 0 minutes 00:32:00.442 00:32:00.442 Number of Queues 00:32:00.442 ================ 00:32:00.442 Number of I/O Submission Queues: 64 00:32:00.442 Number of I/O Completion Queues: 64 00:32:00.442 00:32:00.442 ZNS Specific Controller Data 00:32:00.442 ============================ 00:32:00.442 Zone Append Size Limit: 0 00:32:00.442 00:32:00.442 00:32:00.442 Active Namespaces 00:32:00.442 ================= 00:32:00.442 Namespace ID:1 00:32:00.442 Error Recovery Timeout: Unlimited 00:32:00.442 Command Set Identifier: NVM (00h) 00:32:00.442 Deallocate: Supported 00:32:00.442 Deallocated/Unwritten Error: Supported 00:32:00.442 Deallocated Read Value: All 0x00 00:32:00.442 Deallocate in Write Zeroes: Not Supported 00:32:00.442 Deallocated Guard Field: 0xFFFF 00:32:00.442 Flush: Supported 00:32:00.442 Reservation: Not Supported 00:32:00.442 Namespace Sharing Capabilities: Private 00:32:00.442 Size (in LBAs): 1310720 (5GiB) 00:32:00.442 Capacity (in LBAs): 1310720 (5GiB) 00:32:00.442 Utilization (in LBAs): 1310720 (5GiB) 00:32:00.442 Thin Provisioning: Not Supported 00:32:00.442 Per-NS Atomic Units: No 00:32:00.442 Maximum Single Source Range Length: 128 00:32:00.442 Maximum Copy Length: 128 00:32:00.442 Maximum Source Range Count: 128 00:32:00.442 NGUID/EUI64 Never Reused: No 00:32:00.442 Namespace Write Protected: No 00:32:00.442 Number of LBA Formats: 8 00:32:00.442 Current LBA Format: LBA Format #04 00:32:00.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:00.442 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:00.442 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:32:00.442 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:00.442 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:00.442 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:00.442 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:00.442 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:00.442 00:32:00.442 12:16:05 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:00.442 12:16:05 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:32:00.703 ===================================================== 00:32:00.703 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:00.703 ===================================================== 00:32:00.703 Controller Capabilities/Features 00:32:00.703 ================================ 00:32:00.703 Vendor ID: 1b36 00:32:00.703 Subsystem Vendor ID: 1af4 00:32:00.703 Serial Number: 12340 00:32:00.703 Model Number: QEMU NVMe Ctrl 00:32:00.703 Firmware Version: 8.0.0 00:32:00.703 Recommended Arb Burst: 6 00:32:00.703 IEEE OUI Identifier: 00 54 52 00:32:00.703 Multi-path I/O 00:32:00.703 May have multiple subsystem ports: No 00:32:00.703 May have multiple controllers: No 00:32:00.703 Associated with SR-IOV VF: No 00:32:00.703 Max Data Transfer Size: 524288 00:32:00.703 Max Number of Namespaces: 256 00:32:00.703 Max Number of I/O Queues: 64 00:32:00.703 NVMe Specification Version (VS): 1.4 00:32:00.703 NVMe Specification Version (Identify): 1.4 00:32:00.703 Maximum Queue Entries: 2048 00:32:00.703 Contiguous Queues Required: Yes 00:32:00.703 Arbitration Mechanisms Supported 00:32:00.703 Weighted Round Robin: Not Supported 00:32:00.703 Vendor Specific: Not Supported 00:32:00.703 Reset Timeout: 7500 ms 00:32:00.703 Doorbell Stride: 4 bytes 00:32:00.703 NVM Subsystem Reset: Not Supported 00:32:00.703 Command Sets Supported 00:32:00.703 NVM Command Set: Supported 00:32:00.703 Boot Partition: Not Supported 00:32:00.703 Memory Page Size Minimum: 4096 bytes 00:32:00.703 Memory Page Size Maximum: 65536 bytes 00:32:00.703 Persistent Memory Region: Not Supported 00:32:00.703 Optional Asynchronous Events Supported 00:32:00.703 Namespace Attribute Notices: Supported 00:32:00.703 Firmware Activation Notices: Not Supported 00:32:00.703 ANA Change Notices: Not Supported 00:32:00.703 PLE Aggregate Log Change Notices: Not Supported 00:32:00.703 LBA Status Info Alert Notices: Not Supported 00:32:00.703 EGE Aggregate Log Change Notices: Not Supported 00:32:00.703 Normal NVM Subsystem Shutdown event: Not Supported 00:32:00.703 Zone Descriptor Change Notices: Not Supported 00:32:00.703 Discovery Log Change Notices: Not Supported 00:32:00.703 Controller Attributes 00:32:00.703 128-bit Host Identifier: Not Supported 00:32:00.703 Non-Operational Permissive Mode: Not Supported 00:32:00.703 NVM Sets: Not Supported 00:32:00.703 Read Recovery Levels: Not Supported 00:32:00.703 Endurance Groups: Not Supported 00:32:00.703 Predictable Latency Mode: Not Supported 00:32:00.703 Traffic Based Keep ALive: Not Supported 00:32:00.703 Namespace Granularity: Not Supported 00:32:00.703 SQ Associations: Not Supported 00:32:00.703 UUID List: Not Supported 00:32:00.703 Multi-Domain Subsystem: Not Supported 00:32:00.703 Fixed Capacity Management: Not Supported 00:32:00.703 Variable Capacity Management: Not Supported 00:32:00.703 Delete Endurance Group: Not Supported 00:32:00.703 Delete NVM Set: Not Supported 00:32:00.703 Extended LBA Formats Supported: Supported 
00:32:00.703 Flexible Data Placement Supported: Not Supported 00:32:00.703 00:32:00.703 Controller Memory Buffer Support 00:32:00.703 ================================ 00:32:00.703 Supported: No 00:32:00.703 00:32:00.703 Persistent Memory Region Support 00:32:00.703 ================================ 00:32:00.703 Supported: No 00:32:00.703 00:32:00.703 Admin Command Set Attributes 00:32:00.703 ============================ 00:32:00.703 Security Send/Receive: Not Supported 00:32:00.703 Format NVM: Supported 00:32:00.703 Firmware Activate/Download: Not Supported 00:32:00.703 Namespace Management: Supported 00:32:00.703 Device Self-Test: Not Supported 00:32:00.703 Directives: Supported 00:32:00.703 NVMe-MI: Not Supported 00:32:00.703 Virtualization Management: Not Supported 00:32:00.703 Doorbell Buffer Config: Supported 00:32:00.703 Get LBA Status Capability: Not Supported 00:32:00.703 Command & Feature Lockdown Capability: Not Supported 00:32:00.703 Abort Command Limit: 4 00:32:00.703 Async Event Request Limit: 4 00:32:00.703 Number of Firmware Slots: N/A 00:32:00.703 Firmware Slot 1 Read-Only: N/A 00:32:00.703 Firmware Activation Without Reset: N/A 00:32:00.703 Multiple Update Detection Support: N/A 00:32:00.703 Firmware Update Granularity: No Information Provided 00:32:00.703 Per-Namespace SMART Log: Yes 00:32:00.703 Asymmetric Namespace Access Log Page: Not Supported 00:32:00.703 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:00.703 Command Effects Log Page: Supported 00:32:00.703 Get Log Page Extended Data: Supported 00:32:00.703 Telemetry Log Pages: Not Supported 00:32:00.703 Persistent Event Log Pages: Not Supported 00:32:00.703 Supported Log Pages Log Page: May Support 00:32:00.703 Commands Supported & Effects Log Page: Not Supported 00:32:00.703 Feature Identifiers & Effects Log Page:May Support 00:32:00.703 NVMe-MI Commands & Effects Log Page: May Support 00:32:00.703 Data Area 4 for Telemetry Log: Not Supported 00:32:00.703 Error Log Page Entries Supported: 1 00:32:00.703 Keep Alive: Not Supported 00:32:00.703 00:32:00.703 NVM Command Set Attributes 00:32:00.703 ========================== 00:32:00.703 Submission Queue Entry Size 00:32:00.703 Max: 64 00:32:00.703 Min: 64 00:32:00.703 Completion Queue Entry Size 00:32:00.703 Max: 16 00:32:00.703 Min: 16 00:32:00.703 Number of Namespaces: 256 00:32:00.703 Compare Command: Supported 00:32:00.703 Write Uncorrectable Command: Not Supported 00:32:00.703 Dataset Management Command: Supported 00:32:00.703 Write Zeroes Command: Supported 00:32:00.703 Set Features Save Field: Supported 00:32:00.703 Reservations: Not Supported 00:32:00.703 Timestamp: Supported 00:32:00.703 Copy: Supported 00:32:00.703 Volatile Write Cache: Present 00:32:00.703 Atomic Write Unit (Normal): 1 00:32:00.703 Atomic Write Unit (PFail): 1 00:32:00.703 Atomic Compare & Write Unit: 1 00:32:00.703 Fused Compare & Write: Not Supported 00:32:00.703 Scatter-Gather List 00:32:00.703 SGL Command Set: Supported 00:32:00.703 SGL Keyed: Not Supported 00:32:00.703 SGL Bit Bucket Descriptor: Not Supported 00:32:00.703 SGL Metadata Pointer: Not Supported 00:32:00.703 Oversized SGL: Not Supported 00:32:00.703 SGL Metadata Address: Not Supported 00:32:00.704 SGL Offset: Not Supported 00:32:00.704 Transport SGL Data Block: Not Supported 00:32:00.704 Replay Protected Memory Block: Not Supported 00:32:00.704 00:32:00.704 Firmware Slot Information 00:32:00.704 ========================= 00:32:00.704 Active slot: 1 00:32:00.704 Slot 1 Firmware Revision: 1.0 00:32:00.704 00:32:00.704 
00:32:00.704 Commands Supported and Effects 00:32:00.704 ============================== 00:32:00.704 Admin Commands 00:32:00.704 -------------- 00:32:00.704 Delete I/O Submission Queue (00h): Supported 00:32:00.704 Create I/O Submission Queue (01h): Supported 00:32:00.704 Get Log Page (02h): Supported 00:32:00.704 Delete I/O Completion Queue (04h): Supported 00:32:00.704 Create I/O Completion Queue (05h): Supported 00:32:00.704 Identify (06h): Supported 00:32:00.704 Abort (08h): Supported 00:32:00.704 Set Features (09h): Supported 00:32:00.704 Get Features (0Ah): Supported 00:32:00.704 Asynchronous Event Request (0Ch): Supported 00:32:00.704 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:00.704 Directive Send (19h): Supported 00:32:00.704 Directive Receive (1Ah): Supported 00:32:00.704 Virtualization Management (1Ch): Supported 00:32:00.704 Doorbell Buffer Config (7Ch): Supported 00:32:00.704 Format NVM (80h): Supported LBA-Change 00:32:00.704 I/O Commands 00:32:00.704 ------------ 00:32:00.704 Flush (00h): Supported LBA-Change 00:32:00.704 Write (01h): Supported LBA-Change 00:32:00.704 Read (02h): Supported 00:32:00.704 Compare (05h): Supported 00:32:00.704 Write Zeroes (08h): Supported LBA-Change 00:32:00.704 Dataset Management (09h): Supported LBA-Change 00:32:00.704 Unknown (0Ch): Supported 00:32:00.704 Unknown (12h): Supported 00:32:00.704 Copy (19h): Supported LBA-Change 00:32:00.704 Unknown (1Dh): Supported LBA-Change 00:32:00.704 00:32:00.704 Error Log 00:32:00.704 ========= 00:32:00.704 00:32:00.704 Arbitration 00:32:00.704 =========== 00:32:00.704 Arbitration Burst: no limit 00:32:00.704 00:32:00.704 Power Management 00:32:00.704 ================ 00:32:00.704 Number of Power States: 1 00:32:00.704 Current Power State: Power State #0 00:32:00.704 Power State #0: 00:32:00.704 Max Power: 25.00 W 00:32:00.704 Non-Operational State: Operational 00:32:00.704 Entry Latency: 16 microseconds 00:32:00.704 Exit Latency: 4 microseconds 00:32:00.704 Relative Read Throughput: 0 00:32:00.704 Relative Read Latency: 0 00:32:00.704 Relative Write Throughput: 0 00:32:00.704 Relative Write Latency: 0 00:32:00.704 Idle Power: Not Reported 00:32:00.704 Active Power: Not Reported 00:32:00.704 Non-Operational Permissive Mode: Not Supported 00:32:00.704 00:32:00.704 Health Information 00:32:00.704 ================== 00:32:00.704 Critical Warnings: 00:32:00.704 Available Spare Space: OK 00:32:00.704 Temperature: OK 00:32:00.704 Device Reliability: OK 00:32:00.704 Read Only: No 00:32:00.704 Volatile Memory Backup: OK 00:32:00.704 Current Temperature: 323 Kelvin (50 Celsius) 00:32:00.704 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:00.704 Available Spare: 0% 00:32:00.704 Available Spare Threshold: 0% 00:32:00.704 Life Percentage Used: 0% 00:32:00.704 Data Units Read: 7779 00:32:00.704 Data Units Written: 3792 00:32:00.704 Host Read Commands: 383305 00:32:00.704 Host Write Commands: 206877 00:32:00.704 Controller Busy Time: 0 minutes 00:32:00.704 Power Cycles: 0 00:32:00.704 Power On Hours: 0 hours 00:32:00.704 Unsafe Shutdowns: 0 00:32:00.704 Unrecoverable Media Errors: 0 00:32:00.704 Lifetime Error Log Entries: 0 00:32:00.704 Warning Temperature Time: 0 minutes 00:32:00.704 Critical Temperature Time: 0 minutes 00:32:00.704 00:32:00.704 Number of Queues 00:32:00.704 ================ 00:32:00.704 Number of I/O Submission Queues: 64 00:32:00.704 Number of I/O Completion Queues: 64 00:32:00.704 00:32:00.704 ZNS Specific Controller Data 00:32:00.704 ============================ 
00:32:00.704 Zone Append Size Limit: 0 00:32:00.704 00:32:00.704 00:32:00.704 Active Namespaces 00:32:00.704 ================= 00:32:00.704 Namespace ID:1 00:32:00.704 Error Recovery Timeout: Unlimited 00:32:00.704 Command Set Identifier: NVM (00h) 00:32:00.704 Deallocate: Supported 00:32:00.704 Deallocated/Unwritten Error: Supported 00:32:00.704 Deallocated Read Value: All 0x00 00:32:00.704 Deallocate in Write Zeroes: Not Supported 00:32:00.704 Deallocated Guard Field: 0xFFFF 00:32:00.704 Flush: Supported 00:32:00.704 Reservation: Not Supported 00:32:00.704 Namespace Sharing Capabilities: Private 00:32:00.704 Size (in LBAs): 1310720 (5GiB) 00:32:00.704 Capacity (in LBAs): 1310720 (5GiB) 00:32:00.704 Utilization (in LBAs): 1310720 (5GiB) 00:32:00.704 Thin Provisioning: Not Supported 00:32:00.704 Per-NS Atomic Units: No 00:32:00.704 Maximum Single Source Range Length: 128 00:32:00.704 Maximum Copy Length: 128 00:32:00.704 Maximum Source Range Count: 128 00:32:00.704 NGUID/EUI64 Never Reused: No 00:32:00.704 Namespace Write Protected: No 00:32:00.704 Number of LBA Formats: 8 00:32:00.704 Current LBA Format: LBA Format #04 00:32:00.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:00.704 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:00.704 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:00.704 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:00.704 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:00.704 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:00.704 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:00.704 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:00.704 00:32:00.704 00:32:00.704 real 0m0.600s 00:32:00.704 user 0m0.200s 00:32:00.704 sys 0m0.289s 00:32:00.704 12:16:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:00.704 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:32:00.704 ************************************ 00:32:00.704 END TEST nvme_identify 00:32:00.704 ************************************ 00:32:00.704 12:16:06 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:32:00.704 12:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:00.704 12:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:00.704 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:32:00.704 ************************************ 00:32:00.704 START TEST nvme_perf 00:32:00.704 ************************************ 00:32:00.704 12:16:06 -- common/autotest_common.sh@1114 -- # nvme_perf 00:32:00.704 12:16:06 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:32:02.082 Initializing NVMe Controllers 00:32:02.082 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:02.082 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:02.082 Initialization complete. Launching workers. 
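For reference, the nvme_perf stage launched just above drives spdk_nvme_perf with a 128-deep queue (-q 128) of 12288-byte reads (-w read -o 12288) for one second (-t 1); the -LL option is what produces the latency summary and histogram that follow. A minimal sketch of re-running the identify/perf pair by hand is below; the binary paths, flags, and the PCIe address 0000:00:06.0 are copied from this log, while the bdfs array, the relative working directory, and invoking the tools outside the test harness are assumptions:

  # assumed: run from /home/vagrant/spdk_repo/spdk with the device still bound for SPDK use
  bdfs=(0000:00:06.0)
  for bdf in "${bdfs[@]}"; do
      # dump controller and namespace data per device, as nvme.sh@15-16 does above
      ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
  done
  # one-second 12 KiB queued-read run with latency tracking, as nvme.sh@22 does above
  ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N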
00:32:02.082 ======================================================== 00:32:02.082 Latency(us) 00:32:02.082 Device Information : IOPS MiB/s Average min max 00:32:02.082 PCIE (0000:00:06.0) NSID 1 from core 0: 57216.00 670.50 2238.69 1186.29 5761.65 00:32:02.082 ======================================================== 00:32:02.082 Total : 57216.00 670.50 2238.69 1186.29 5761.65 00:32:02.082 00:32:02.082 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:02.082 ================================================================================= 00:32:02.082 1.00000% : 1310.720us 00:32:02.082 10.00000% : 1526.691us 00:32:02.082 25.00000% : 1794.793us 00:32:02.082 50.00000% : 2234.182us 00:32:02.082 75.00000% : 2666.124us 00:32:02.082 90.00000% : 2934.225us 00:32:02.082 95.00000% : 3127.855us 00:32:02.082 98.00000% : 3351.273us 00:32:02.082 99.00000% : 3470.429us 00:32:02.082 99.50000% : 3664.058us 00:32:02.082 99.90000% : 4647.098us 00:32:02.082 99.99000% : 5630.138us 00:32:02.082 99.99900% : 5779.084us 00:32:02.082 99.99990% : 5779.084us 00:32:02.082 99.99999% : 5779.084us 00:32:02.082 00:32:02.082 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:02.082 ============================================================================== 00:32:02.082 Range in us Cumulative IO count 00:32:02.082 1184.116 - 1191.564: 0.0035% ( 2) 00:32:02.082 1191.564 - 1199.011: 0.0105% ( 4) 00:32:02.082 1199.011 - 1206.458: 0.0192% ( 5) 00:32:02.082 1206.458 - 1213.905: 0.0245% ( 3) 00:32:02.082 1213.905 - 1221.353: 0.0385% ( 8) 00:32:02.082 1221.353 - 1228.800: 0.0577% ( 11) 00:32:02.082 1228.800 - 1236.247: 0.0996% ( 24) 00:32:02.082 1236.247 - 1243.695: 0.1241% ( 14) 00:32:02.082 1243.695 - 1251.142: 0.1643% ( 23) 00:32:02.082 1251.142 - 1258.589: 0.2342% ( 40) 00:32:02.082 1258.589 - 1266.036: 0.3041% ( 40) 00:32:02.082 1266.036 - 1273.484: 0.4177% ( 65) 00:32:02.082 1273.484 - 1280.931: 0.5261% ( 62) 00:32:02.082 1280.931 - 1288.378: 0.6572% ( 75) 00:32:02.082 1288.378 - 1295.825: 0.8162% ( 91) 00:32:02.082 1295.825 - 1303.273: 0.9438% ( 73) 00:32:02.082 1303.273 - 1310.720: 1.1570% ( 122) 00:32:02.082 1310.720 - 1318.167: 1.3213% ( 94) 00:32:02.082 1318.167 - 1325.615: 1.5380% ( 124) 00:32:02.082 1325.615 - 1333.062: 1.7670% ( 131) 00:32:02.082 1333.062 - 1340.509: 1.9732% ( 118) 00:32:02.082 1340.509 - 1347.956: 2.2319% ( 148) 00:32:02.082 1347.956 - 1355.404: 2.4976% ( 152) 00:32:02.082 1355.404 - 1362.851: 2.7335% ( 135) 00:32:02.082 1362.851 - 1370.298: 3.0096% ( 158) 00:32:02.082 1370.298 - 1377.745: 3.2771% ( 153) 00:32:02.082 1377.745 - 1385.193: 3.5654% ( 165) 00:32:02.082 1385.193 - 1392.640: 3.8328% ( 153) 00:32:02.082 1392.640 - 1400.087: 4.1317% ( 171) 00:32:02.082 1400.087 - 1407.535: 4.4411% ( 177) 00:32:02.082 1407.535 - 1414.982: 4.7347% ( 168) 00:32:02.082 1414.982 - 1422.429: 5.0825% ( 199) 00:32:02.082 1422.429 - 1429.876: 5.3726% ( 166) 00:32:02.082 1429.876 - 1437.324: 5.7274% ( 203) 00:32:02.082 1437.324 - 1444.771: 6.0682% ( 195) 00:32:02.082 1444.771 - 1452.218: 6.3776% ( 177) 00:32:02.082 1452.218 - 1459.665: 6.7499% ( 213) 00:32:02.082 1459.665 - 1467.113: 7.0767% ( 187) 00:32:02.082 1467.113 - 1474.560: 7.4682% ( 224) 00:32:02.082 1474.560 - 1482.007: 7.8195% ( 201) 00:32:02.082 1482.007 - 1489.455: 8.2075% ( 222) 00:32:02.082 1489.455 - 1496.902: 8.5815% ( 214) 00:32:02.082 1496.902 - 1504.349: 8.9451% ( 208) 00:32:02.082 1504.349 - 1511.796: 9.3488% ( 231) 00:32:02.082 1511.796 - 1519.244: 9.7071% ( 205) 00:32:02.082 1519.244 - 1526.691: 
10.1108% ( 231) 00:32:02.082 1526.691 - 1534.138: 10.5006% ( 223) 00:32:02.082 1534.138 - 1541.585: 10.8886% ( 222) 00:32:02.082 1541.585 - 1549.033: 11.3273% ( 251) 00:32:02.082 1549.033 - 1556.480: 11.6820% ( 203) 00:32:02.082 1556.480 - 1563.927: 12.1365% ( 260) 00:32:02.082 1563.927 - 1571.375: 12.5192% ( 219) 00:32:02.082 1571.375 - 1578.822: 12.9632% ( 254) 00:32:02.082 1578.822 - 1586.269: 13.3774% ( 237) 00:32:02.082 1586.269 - 1593.716: 13.7706% ( 225) 00:32:02.082 1593.716 - 1601.164: 14.2355% ( 266) 00:32:02.082 1601.164 - 1608.611: 14.6130% ( 216) 00:32:02.082 1608.611 - 1616.058: 15.0745% ( 264) 00:32:02.082 1616.058 - 1623.505: 15.4729% ( 228) 00:32:02.082 1623.505 - 1630.953: 15.9116% ( 251) 00:32:02.082 1630.953 - 1638.400: 16.3381% ( 244) 00:32:02.082 1638.400 - 1645.847: 16.7715% ( 248) 00:32:02.082 1645.847 - 1653.295: 17.2190% ( 256) 00:32:02.082 1653.295 - 1660.742: 17.6594% ( 252) 00:32:02.082 1660.742 - 1668.189: 18.0911% ( 247) 00:32:02.082 1668.189 - 1675.636: 18.5298% ( 251) 00:32:02.082 1675.636 - 1683.084: 18.9440% ( 237) 00:32:02.082 1683.084 - 1690.531: 19.3809% ( 250) 00:32:02.082 1690.531 - 1697.978: 19.8144% ( 248) 00:32:02.082 1697.978 - 1705.425: 20.2618% ( 256) 00:32:02.082 1705.425 - 1712.873: 20.6638% ( 230) 00:32:02.082 1712.873 - 1720.320: 21.0920% ( 245) 00:32:02.082 1720.320 - 1727.767: 21.5377% ( 255) 00:32:02.082 1727.767 - 1735.215: 21.9659% ( 245) 00:32:02.082 1735.215 - 1742.662: 22.4238% ( 262) 00:32:02.082 1742.662 - 1750.109: 22.8345% ( 235) 00:32:02.082 1750.109 - 1757.556: 23.2627% ( 245) 00:32:02.082 1757.556 - 1765.004: 23.6962% ( 248) 00:32:02.082 1765.004 - 1772.451: 24.1034% ( 233) 00:32:02.082 1772.451 - 1779.898: 24.5578% ( 260) 00:32:02.082 1779.898 - 1787.345: 24.9685% ( 235) 00:32:02.082 1787.345 - 1794.793: 25.3810% ( 236) 00:32:02.082 1794.793 - 1802.240: 25.8057% ( 243) 00:32:02.082 1802.240 - 1809.687: 26.2182% ( 236) 00:32:02.082 1809.687 - 1817.135: 26.6691% ( 258) 00:32:02.082 1817.135 - 1824.582: 27.0903% ( 241) 00:32:02.082 1824.582 - 1832.029: 27.5238% ( 248) 00:32:02.082 1832.029 - 1839.476: 27.9502% ( 244) 00:32:02.082 1839.476 - 1846.924: 28.3732% ( 242) 00:32:02.082 1846.924 - 1854.371: 28.7926% ( 240) 00:32:02.082 1854.371 - 1861.818: 29.2313% ( 251) 00:32:02.082 1861.818 - 1869.265: 29.6578% ( 244) 00:32:02.082 1869.265 - 1876.713: 30.0755% ( 239) 00:32:02.082 1876.713 - 1884.160: 30.5089% ( 248) 00:32:02.082 1884.160 - 1891.607: 30.9232% ( 237) 00:32:02.082 1891.607 - 1899.055: 31.3356% ( 236) 00:32:02.082 1899.055 - 1906.502: 31.7761% ( 252) 00:32:02.082 1906.502 - 1921.396: 32.6115% ( 478) 00:32:02.082 1921.396 - 1936.291: 33.4592% ( 485) 00:32:02.082 1936.291 - 1951.185: 34.3313% ( 499) 00:32:02.082 1951.185 - 1966.080: 35.1964% ( 495) 00:32:02.082 1966.080 - 1980.975: 36.0196% ( 471) 00:32:02.082 1980.975 - 1995.869: 36.8726% ( 488) 00:32:02.082 1995.869 - 2010.764: 37.7359% ( 494) 00:32:02.082 2010.764 - 2025.658: 38.5784% ( 482) 00:32:02.082 2025.658 - 2040.553: 39.4435% ( 495) 00:32:02.082 2040.553 - 2055.447: 40.3034% ( 492) 00:32:02.082 2055.447 - 2070.342: 41.1511% ( 485) 00:32:02.082 2070.342 - 2085.236: 41.9970% ( 484) 00:32:02.082 2085.236 - 2100.131: 42.8674% ( 498) 00:32:02.082 2100.131 - 2115.025: 43.7150% ( 485) 00:32:02.082 2115.025 - 2129.920: 44.5714% ( 490) 00:32:02.082 2129.920 - 2144.815: 45.4226% ( 487) 00:32:02.082 2144.815 - 2159.709: 46.2773% ( 489) 00:32:02.082 2159.709 - 2174.604: 47.1337% ( 490) 00:32:02.082 2174.604 - 2189.498: 47.9918% ( 491) 00:32:02.082 2189.498 - 2204.393: 
48.8447% ( 488) 00:32:02.082 2204.393 - 2219.287: 49.7116% ( 496) 00:32:02.082 2219.287 - 2234.182: 50.5768% ( 495) 00:32:02.082 2234.182 - 2249.076: 51.4437% ( 496) 00:32:02.082 2249.076 - 2263.971: 52.3105% ( 496) 00:32:02.082 2263.971 - 2278.865: 53.1652% ( 489) 00:32:02.082 2278.865 - 2293.760: 54.0268% ( 493) 00:32:02.082 2293.760 - 2308.655: 54.8990% ( 499) 00:32:02.082 2308.655 - 2323.549: 55.7571% ( 491) 00:32:02.082 2323.549 - 2338.444: 56.6293% ( 499) 00:32:02.082 2338.444 - 2353.338: 57.4787% ( 486) 00:32:02.082 2353.338 - 2368.233: 58.3403% ( 493) 00:32:02.082 2368.233 - 2383.127: 59.2055% ( 495) 00:32:02.082 2383.127 - 2398.022: 60.0776% ( 499) 00:32:02.082 2398.022 - 2412.916: 60.9497% ( 499) 00:32:02.082 2412.916 - 2427.811: 61.8131% ( 494) 00:32:02.082 2427.811 - 2442.705: 62.6625% ( 486) 00:32:02.082 2442.705 - 2457.600: 63.5382% ( 501) 00:32:02.082 2457.600 - 2472.495: 64.3911% ( 488) 00:32:02.083 2472.495 - 2487.389: 65.2912% ( 515) 00:32:02.083 2487.389 - 2502.284: 66.1528% ( 493) 00:32:02.083 2502.284 - 2517.178: 67.0040% ( 487) 00:32:02.083 2517.178 - 2532.073: 67.8761% ( 499) 00:32:02.083 2532.073 - 2546.967: 68.7255% ( 486) 00:32:02.083 2546.967 - 2561.862: 69.5872% ( 493) 00:32:02.083 2561.862 - 2576.756: 70.4628% ( 501) 00:32:02.083 2576.756 - 2591.651: 71.3262% ( 494) 00:32:02.083 2591.651 - 2606.545: 72.1878% ( 493) 00:32:02.083 2606.545 - 2621.440: 73.0512% ( 494) 00:32:02.083 2621.440 - 2636.335: 73.9391% ( 508) 00:32:02.083 2636.335 - 2651.229: 74.7973% ( 491) 00:32:02.083 2651.229 - 2666.124: 75.6624% ( 495) 00:32:02.083 2666.124 - 2681.018: 76.5310% ( 497) 00:32:02.083 2681.018 - 2695.913: 77.3979% ( 496) 00:32:02.083 2695.913 - 2710.807: 78.2543% ( 490) 00:32:02.083 2710.807 - 2725.702: 79.1282% ( 500) 00:32:02.083 2725.702 - 2740.596: 80.0021% ( 500) 00:32:02.083 2740.596 - 2755.491: 80.8655% ( 494) 00:32:02.083 2755.491 - 2770.385: 81.7324% ( 496) 00:32:02.083 2770.385 - 2785.280: 82.5975% ( 495) 00:32:02.083 2785.280 - 2800.175: 83.4609% ( 494) 00:32:02.083 2800.175 - 2815.069: 84.3138% ( 488) 00:32:02.083 2815.069 - 2829.964: 85.1807% ( 496) 00:32:02.083 2829.964 - 2844.858: 86.0581% ( 502) 00:32:02.083 2844.858 - 2859.753: 86.8743% ( 467) 00:32:02.083 2859.753 - 2874.647: 87.7167% ( 482) 00:32:02.083 2874.647 - 2889.542: 88.4927% ( 444) 00:32:02.083 2889.542 - 2904.436: 89.2320% ( 423) 00:32:02.083 2904.436 - 2919.331: 89.9049% ( 385) 00:32:02.083 2919.331 - 2934.225: 90.5166% ( 350) 00:32:02.083 2934.225 - 2949.120: 91.0899% ( 328) 00:32:02.083 2949.120 - 2964.015: 91.5968% ( 290) 00:32:02.083 2964.015 - 2978.909: 92.0389% ( 253) 00:32:02.083 2978.909 - 2993.804: 92.4567% ( 239) 00:32:02.083 2993.804 - 3008.698: 92.8219% ( 209) 00:32:02.083 3008.698 - 3023.593: 93.1697% ( 199) 00:32:02.083 3023.593 - 3038.487: 93.4931% ( 185) 00:32:02.083 3038.487 - 3053.382: 93.8024% ( 177) 00:32:02.083 3053.382 - 3068.276: 94.1030% ( 172) 00:32:02.083 3068.276 - 3083.171: 94.3879% ( 163) 00:32:02.083 3083.171 - 3098.065: 94.6658% ( 159) 00:32:02.083 3098.065 - 3112.960: 94.9297% ( 151) 00:32:02.083 3112.960 - 3127.855: 95.1744% ( 140) 00:32:02.083 3127.855 - 3142.749: 95.4348% ( 149) 00:32:02.083 3142.749 - 3157.644: 95.6848% ( 143) 00:32:02.083 3157.644 - 3172.538: 95.9085% ( 128) 00:32:02.083 3172.538 - 3187.433: 96.1165% ( 119) 00:32:02.083 3187.433 - 3202.327: 96.3157% ( 114) 00:32:02.083 3202.327 - 3217.222: 96.5132% ( 113) 00:32:02.083 3217.222 - 3232.116: 96.6985% ( 106) 00:32:02.083 3232.116 - 3247.011: 96.8802% ( 104) 00:32:02.083 3247.011 - 3261.905: 
97.0585% ( 102) 00:32:02.083 3261.905 - 3276.800: 97.2211% ( 93) 00:32:02.083 3276.800 - 3291.695: 97.3819% ( 92) 00:32:02.083 3291.695 - 3306.589: 97.5357% ( 88) 00:32:02.083 3306.589 - 3321.484: 97.6930% ( 90) 00:32:02.083 3321.484 - 3336.378: 97.8503% ( 90) 00:32:02.083 3336.378 - 3351.273: 98.0006% ( 86) 00:32:02.083 3351.273 - 3366.167: 98.1509% ( 86) 00:32:02.083 3366.167 - 3381.062: 98.2889% ( 79) 00:32:02.083 3381.062 - 3395.956: 98.4375% ( 85) 00:32:02.083 3395.956 - 3410.851: 98.5791% ( 81) 00:32:02.083 3410.851 - 3425.745: 98.7067% ( 73) 00:32:02.083 3425.745 - 3440.640: 98.8342% ( 73) 00:32:02.083 3440.640 - 3455.535: 98.9444% ( 63) 00:32:02.083 3455.535 - 3470.429: 99.0422% ( 56) 00:32:02.083 3470.429 - 3485.324: 99.1314% ( 51) 00:32:02.083 3485.324 - 3500.218: 99.1978% ( 38) 00:32:02.083 3500.218 - 3515.113: 99.2520% ( 31) 00:32:02.083 3515.113 - 3530.007: 99.2974% ( 26) 00:32:02.083 3530.007 - 3544.902: 99.3324% ( 20) 00:32:02.083 3544.902 - 3559.796: 99.3603% ( 16) 00:32:02.083 3559.796 - 3574.691: 99.3918% ( 18) 00:32:02.083 3574.691 - 3589.585: 99.4180% ( 15) 00:32:02.083 3589.585 - 3604.480: 99.4407% ( 13) 00:32:02.083 3604.480 - 3619.375: 99.4617% ( 12) 00:32:02.083 3619.375 - 3634.269: 99.4809% ( 11) 00:32:02.083 3634.269 - 3649.164: 99.4966% ( 9) 00:32:02.083 3649.164 - 3664.058: 99.5159% ( 11) 00:32:02.083 3664.058 - 3678.953: 99.5316% ( 9) 00:32:02.083 3678.953 - 3693.847: 99.5438% ( 7) 00:32:02.083 3693.847 - 3708.742: 99.5596% ( 9) 00:32:02.083 3708.742 - 3723.636: 99.5701% ( 6) 00:32:02.083 3723.636 - 3738.531: 99.5840% ( 8) 00:32:02.083 3738.531 - 3753.425: 99.5998% ( 9) 00:32:02.083 3753.425 - 3768.320: 99.6102% ( 6) 00:32:02.083 3768.320 - 3783.215: 99.6190% ( 5) 00:32:02.083 3783.215 - 3798.109: 99.6295% ( 6) 00:32:02.083 3798.109 - 3813.004: 99.6382% ( 5) 00:32:02.083 3813.004 - 3842.793: 99.6557% ( 10) 00:32:02.083 3842.793 - 3872.582: 99.6714% ( 9) 00:32:02.083 3872.582 - 3902.371: 99.6854% ( 8) 00:32:02.083 3902.371 - 3932.160: 99.6976% ( 7) 00:32:02.083 3932.160 - 3961.949: 99.7099% ( 7) 00:32:02.083 3961.949 - 3991.738: 99.7221% ( 7) 00:32:02.083 3991.738 - 4021.527: 99.7343% ( 7) 00:32:02.083 4021.527 - 4051.316: 99.7466% ( 7) 00:32:02.083 4051.316 - 4081.105: 99.7606% ( 8) 00:32:02.083 4081.105 - 4110.895: 99.7710% ( 6) 00:32:02.083 4110.895 - 4140.684: 99.7850% ( 8) 00:32:02.083 4140.684 - 4170.473: 99.7990% ( 8) 00:32:02.083 4170.473 - 4200.262: 99.8112% ( 7) 00:32:02.083 4200.262 - 4230.051: 99.8235% ( 7) 00:32:02.083 4230.051 - 4259.840: 99.8340% ( 6) 00:32:02.083 4259.840 - 4289.629: 99.8427% ( 5) 00:32:02.083 4289.629 - 4319.418: 99.8514% ( 5) 00:32:02.083 4319.418 - 4349.207: 99.8584% ( 4) 00:32:02.083 4349.207 - 4378.996: 99.8637% ( 3) 00:32:02.083 4378.996 - 4408.785: 99.8707% ( 4) 00:32:02.083 4408.785 - 4438.575: 99.8759% ( 3) 00:32:02.083 4438.575 - 4468.364: 99.8812% ( 3) 00:32:02.083 4468.364 - 4498.153: 99.8881% ( 4) 00:32:02.083 4498.153 - 4527.942: 99.8899% ( 1) 00:32:02.083 4527.942 - 4557.731: 99.8934% ( 2) 00:32:02.083 4557.731 - 4587.520: 99.8951% ( 1) 00:32:02.083 4587.520 - 4617.309: 99.8986% ( 2) 00:32:02.083 4617.309 - 4647.098: 99.9021% ( 2) 00:32:02.083 4647.098 - 4676.887: 99.9039% ( 1) 00:32:02.083 4676.887 - 4706.676: 99.9074% ( 2) 00:32:02.083 4706.676 - 4736.465: 99.9109% ( 2) 00:32:02.083 4736.465 - 4766.255: 99.9126% ( 1) 00:32:02.083 4766.255 - 4796.044: 99.9161% ( 2) 00:32:02.083 4796.044 - 4825.833: 99.9179% ( 1) 00:32:02.083 4825.833 - 4855.622: 99.9214% ( 2) 00:32:02.083 4855.622 - 4885.411: 99.9231% ( 1) 
00:32:02.083 4885.411 - 4915.200: 99.9266% ( 2) 00:32:02.083 4915.200 - 4944.989: 99.9301% ( 2) 00:32:02.083 4944.989 - 4974.778: 99.9318% ( 1) 00:32:02.083 4974.778 - 5004.567: 99.9353% ( 2) 00:32:02.083 5004.567 - 5034.356: 99.9371% ( 1) 00:32:02.083 5034.356 - 5064.145: 99.9406% ( 2) 00:32:02.083 5064.145 - 5093.935: 99.9423% ( 1) 00:32:02.083 5093.935 - 5123.724: 99.9458% ( 2) 00:32:02.083 5123.724 - 5153.513: 99.9493% ( 2) 00:32:02.083 5153.513 - 5183.302: 99.9511% ( 1) 00:32:02.083 5183.302 - 5213.091: 99.9546% ( 2) 00:32:02.083 5213.091 - 5242.880: 99.9563% ( 1) 00:32:02.083 5242.880 - 5272.669: 99.9598% ( 2) 00:32:02.083 5272.669 - 5302.458: 99.9615% ( 1) 00:32:02.083 5302.458 - 5332.247: 99.9650% ( 2) 00:32:02.083 5332.247 - 5362.036: 99.9685% ( 2) 00:32:02.083 5362.036 - 5391.825: 99.9703% ( 1) 00:32:02.083 5391.825 - 5421.615: 99.9738% ( 2) 00:32:02.083 5421.615 - 5451.404: 99.9773% ( 2) 00:32:02.083 5451.404 - 5481.193: 99.9790% ( 1) 00:32:02.083 5481.193 - 5510.982: 99.9808% ( 1) 00:32:02.083 5510.982 - 5540.771: 99.9843% ( 2) 00:32:02.083 5540.771 - 5570.560: 99.9860% ( 1) 00:32:02.083 5570.560 - 5600.349: 99.9895% ( 2) 00:32:02.083 5600.349 - 5630.138: 99.9930% ( 2) 00:32:02.083 5630.138 - 5659.927: 99.9948% ( 1) 00:32:02.083 5659.927 - 5689.716: 99.9983% ( 2) 00:32:02.083 5749.295 - 5779.084: 100.0000% ( 1) 00:32:02.083 00:32:02.083 12:16:07 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:32:03.458 Initializing NVMe Controllers 00:32:03.458 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:03.458 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:03.458 Initialization complete. Launching workers. 00:32:03.458 ======================================================== 00:32:03.458 Latency(us) 00:32:03.458 Device Information : IOPS MiB/s Average min max 00:32:03.458 PCIE (0000:00:06.0) NSID 1 from core 0: 53385.00 625.61 2397.33 1128.99 7562.20 00:32:03.458 ======================================================== 00:32:03.458 Total : 53385.00 625.61 2397.33 1128.99 7562.20 00:32:03.458 00:32:03.458 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:03.458 ================================================================================= 00:32:03.458 1.00000% : 1653.295us 00:32:03.458 10.00000% : 1884.160us 00:32:03.458 25.00000% : 2070.342us 00:32:03.458 50.00000% : 2293.760us 00:32:03.458 75.00000% : 2636.335us 00:32:03.458 90.00000% : 3098.065us 00:32:03.458 95.00000% : 3381.062us 00:32:03.458 98.00000% : 3649.164us 00:32:03.458 99.00000% : 3842.793us 00:32:03.458 99.50000% : 4289.629us 00:32:03.458 99.90000% : 5302.458us 00:32:03.458 99.99000% : 7417.484us 00:32:03.458 99.99900% : 7566.429us 00:32:03.458 99.99990% : 7566.429us 00:32:03.458 99.99999% : 7566.429us 00:32:03.458 00:32:03.458 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:03.458 ============================================================================== 00:32:03.458 Range in us Cumulative IO count 00:32:03.458 1124.538 - 1131.985: 0.0019% ( 1) 00:32:03.458 1213.905 - 1221.353: 0.0037% ( 1) 00:32:03.458 1243.695 - 1251.142: 0.0075% ( 2) 00:32:03.458 1251.142 - 1258.589: 0.0131% ( 3) 00:32:03.458 1266.036 - 1273.484: 0.0150% ( 1) 00:32:03.458 1273.484 - 1280.931: 0.0169% ( 1) 00:32:03.458 1295.825 - 1303.273: 0.0187% ( 1) 00:32:03.458 1310.720 - 1318.167: 0.0206% ( 1) 00:32:03.458 1347.956 - 1355.404: 0.0262% ( 3) 00:32:03.458 1355.404 - 1362.851: 0.0300% ( 2) 00:32:03.458 1362.851 - 
1370.298: 0.0375% ( 4) 00:32:03.458 1370.298 - 1377.745: 0.0431% ( 3) 00:32:03.458 1377.745 - 1385.193: 0.0524% ( 5) 00:32:03.458 1385.193 - 1392.640: 0.0581% ( 3) 00:32:03.458 1392.640 - 1400.087: 0.0618% ( 2) 00:32:03.458 1400.087 - 1407.535: 0.0712% ( 5) 00:32:03.458 1407.535 - 1414.982: 0.0787% ( 4) 00:32:03.458 1414.982 - 1422.429: 0.0918% ( 7) 00:32:03.458 1422.429 - 1429.876: 0.1030% ( 6) 00:32:03.458 1429.876 - 1437.324: 0.1143% ( 6) 00:32:03.458 1437.324 - 1444.771: 0.1236% ( 5) 00:32:03.458 1444.771 - 1452.218: 0.1405% ( 9) 00:32:03.458 1452.218 - 1459.665: 0.1499% ( 5) 00:32:03.458 1459.665 - 1467.113: 0.1648% ( 8) 00:32:03.458 1467.113 - 1474.560: 0.1854% ( 11) 00:32:03.458 1474.560 - 1482.007: 0.1986% ( 7) 00:32:03.458 1482.007 - 1489.455: 0.2098% ( 6) 00:32:03.458 1489.455 - 1496.902: 0.2323% ( 12) 00:32:03.458 1496.902 - 1504.349: 0.2454% ( 7) 00:32:03.458 1504.349 - 1511.796: 0.2679% ( 12) 00:32:03.458 1511.796 - 1519.244: 0.2941% ( 14) 00:32:03.458 1519.244 - 1526.691: 0.3166% ( 12) 00:32:03.458 1526.691 - 1534.138: 0.3447% ( 15) 00:32:03.458 1534.138 - 1541.585: 0.3634% ( 10) 00:32:03.458 1541.585 - 1549.033: 0.3934% ( 16) 00:32:03.458 1549.033 - 1556.480: 0.4215% ( 15) 00:32:03.458 1556.480 - 1563.927: 0.4533% ( 17) 00:32:03.458 1563.927 - 1571.375: 0.4908% ( 20) 00:32:03.458 1571.375 - 1578.822: 0.5095% ( 10) 00:32:03.458 1578.822 - 1586.269: 0.5545% ( 24) 00:32:03.458 1586.269 - 1593.716: 0.5882% ( 18) 00:32:03.458 1593.716 - 1601.164: 0.6107% ( 12) 00:32:03.458 1601.164 - 1608.611: 0.6556% ( 24) 00:32:03.458 1608.611 - 1616.058: 0.6950% ( 21) 00:32:03.458 1616.058 - 1623.505: 0.7362% ( 22) 00:32:03.458 1623.505 - 1630.953: 0.7886% ( 28) 00:32:03.458 1630.953 - 1638.400: 0.9047% ( 62) 00:32:03.458 1638.400 - 1645.847: 0.9741% ( 37) 00:32:03.458 1645.847 - 1653.295: 1.0733% ( 53) 00:32:03.458 1653.295 - 1660.742: 1.1689% ( 51) 00:32:03.458 1660.742 - 1668.189: 1.2607% ( 49) 00:32:03.458 1668.189 - 1675.636: 1.3581% ( 52) 00:32:03.458 1675.636 - 1683.084: 1.4911% ( 71) 00:32:03.458 1683.084 - 1690.531: 1.6372% ( 78) 00:32:03.458 1690.531 - 1697.978: 1.7645% ( 68) 00:32:03.458 1697.978 - 1705.425: 1.9256% ( 86) 00:32:03.458 1705.425 - 1712.873: 2.1055% ( 96) 00:32:03.458 1712.873 - 1720.320: 2.3190% ( 114) 00:32:03.458 1720.320 - 1727.767: 2.4820% ( 87) 00:32:03.458 1727.767 - 1735.215: 2.7142% ( 124) 00:32:03.458 1735.215 - 1742.662: 2.9615% ( 132) 00:32:03.458 1742.662 - 1750.109: 3.2406% ( 149) 00:32:03.458 1750.109 - 1757.556: 3.4916% ( 134) 00:32:03.458 1757.556 - 1765.004: 3.7857% ( 157) 00:32:03.458 1765.004 - 1772.451: 4.0461% ( 139) 00:32:03.458 1772.451 - 1779.898: 4.4170% ( 198) 00:32:03.458 1779.898 - 1787.345: 4.6998% ( 151) 00:32:03.458 1787.345 - 1794.793: 4.9995% ( 160) 00:32:03.458 1794.793 - 1802.240: 5.2674% ( 143) 00:32:03.458 1802.240 - 1809.687: 5.6064% ( 181) 00:32:03.458 1809.687 - 1817.135: 5.9605% ( 189) 00:32:03.458 1817.135 - 1824.582: 6.3913% ( 230) 00:32:03.458 1824.582 - 1832.029: 6.8259% ( 232) 00:32:03.458 1832.029 - 1839.476: 7.1780% ( 188) 00:32:03.458 1839.476 - 1846.924: 7.7700% ( 316) 00:32:03.458 1846.924 - 1854.371: 8.2401% ( 251) 00:32:03.458 1854.371 - 1861.818: 8.8058% ( 302) 00:32:03.458 1861.818 - 1869.265: 9.2011% ( 211) 00:32:03.458 1869.265 - 1876.713: 9.6488% ( 239) 00:32:03.458 1876.713 - 1884.160: 10.1133% ( 248) 00:32:03.458 1884.160 - 1891.607: 10.6210% ( 271) 00:32:03.458 1891.607 - 1899.055: 11.0874% ( 249) 00:32:03.458 1899.055 - 1906.502: 11.5875% ( 267) 00:32:03.458 1906.502 - 1921.396: 12.5073% ( 491) 00:32:03.458 
1921.396 - 1936.291: 13.5356% ( 549) 00:32:03.458 1936.291 - 1951.185: 14.5528% ( 543) 00:32:03.458 1951.185 - 1966.080: 16.0738% ( 812) 00:32:03.458 1966.080 - 1980.975: 17.3232% ( 667) 00:32:03.458 1980.975 - 1995.869: 18.5033% ( 630) 00:32:03.458 1995.869 - 2010.764: 19.7059% ( 642) 00:32:03.458 2010.764 - 2025.658: 21.1913% ( 793) 00:32:03.458 2025.658 - 2040.553: 22.9240% ( 925) 00:32:03.458 2040.553 - 2055.447: 24.7991% ( 1001) 00:32:03.458 2055.447 - 2070.342: 26.3314% ( 818) 00:32:03.458 2070.342 - 2085.236: 27.9217% ( 849) 00:32:03.458 2085.236 - 2100.131: 29.7125% ( 956) 00:32:03.458 2100.131 - 2115.025: 31.4733% ( 940) 00:32:03.458 2115.025 - 2129.920: 33.1479% ( 894) 00:32:03.458 2129.920 - 2144.815: 34.8712% ( 920) 00:32:03.458 2144.815 - 2159.709: 36.5421% ( 892) 00:32:03.459 2159.709 - 2174.604: 38.2242% ( 898) 00:32:03.459 2174.604 - 2189.498: 39.8558% ( 871) 00:32:03.459 2189.498 - 2204.393: 41.5566% ( 908) 00:32:03.459 2204.393 - 2219.287: 43.1844% ( 869) 00:32:03.459 2219.287 - 2234.182: 44.8516% ( 890) 00:32:03.459 2234.182 - 2249.076: 46.7510% ( 1014) 00:32:03.459 2249.076 - 2263.971: 48.4893% ( 928) 00:32:03.459 2263.971 - 2278.865: 49.8829% ( 744) 00:32:03.459 2278.865 - 2293.760: 51.3047% ( 759) 00:32:03.459 2293.760 - 2308.655: 52.7470% ( 770) 00:32:03.459 2308.655 - 2323.549: 54.1856% ( 768) 00:32:03.459 2323.549 - 2338.444: 55.5381% ( 722) 00:32:03.459 2338.444 - 2353.338: 56.8006% ( 674) 00:32:03.459 2353.338 - 2368.233: 57.9545% ( 616) 00:32:03.459 2368.233 - 2383.127: 59.0672% ( 594) 00:32:03.459 2383.127 - 2398.022: 60.2491% ( 631) 00:32:03.459 2398.022 - 2412.916: 61.4143% ( 622) 00:32:03.459 2412.916 - 2427.811: 62.4726% ( 565) 00:32:03.459 2427.811 - 2442.705: 63.4354% ( 514) 00:32:03.459 2442.705 - 2457.600: 64.3926% ( 511) 00:32:03.459 2457.600 - 2472.495: 65.3723% ( 523) 00:32:03.459 2472.495 - 2487.389: 66.3407% ( 517) 00:32:03.459 2487.389 - 2502.284: 67.3073% ( 516) 00:32:03.459 2502.284 - 2517.178: 68.2645% ( 511) 00:32:03.459 2517.178 - 2532.073: 69.2273% ( 514) 00:32:03.459 2532.073 - 2546.967: 70.1452% ( 490) 00:32:03.459 2546.967 - 2561.862: 71.0462% ( 481) 00:32:03.459 2561.862 - 2576.756: 71.9416% ( 478) 00:32:03.459 2576.756 - 2591.651: 72.8013% ( 459) 00:32:03.459 2591.651 - 2606.545: 73.6555% ( 456) 00:32:03.459 2606.545 - 2621.440: 74.4535% ( 426) 00:32:03.459 2621.440 - 2636.335: 75.2740% ( 438) 00:32:03.459 2636.335 - 2651.229: 75.9951% ( 385) 00:32:03.459 2651.229 - 2666.124: 76.6807% ( 366) 00:32:03.459 2666.124 - 2681.018: 77.3832% ( 375) 00:32:03.459 2681.018 - 2695.913: 78.0051% ( 332) 00:32:03.459 2695.913 - 2710.807: 78.6476% ( 343) 00:32:03.459 2710.807 - 2725.702: 79.3107% ( 354) 00:32:03.459 2725.702 - 2740.596: 79.9213% ( 326) 00:32:03.459 2740.596 - 2755.491: 80.4777% ( 297) 00:32:03.459 2755.491 - 2770.385: 81.0565% ( 309) 00:32:03.459 2770.385 - 2785.280: 81.5753% ( 277) 00:32:03.459 2785.280 - 2800.175: 82.1279% ( 295) 00:32:03.459 2800.175 - 2815.069: 82.6187% ( 262) 00:32:03.459 2815.069 - 2829.964: 83.1245% ( 270) 00:32:03.459 2829.964 - 2844.858: 83.6209% ( 265) 00:32:03.459 2844.858 - 2859.753: 84.0798% ( 245) 00:32:03.459 2859.753 - 2874.647: 84.5518% ( 252) 00:32:03.459 2874.647 - 2889.542: 84.9920% ( 235) 00:32:03.459 2889.542 - 2904.436: 85.4772% ( 259) 00:32:03.459 2904.436 - 2919.331: 85.8818% ( 216) 00:32:03.459 2919.331 - 2934.225: 86.3145% ( 231) 00:32:03.459 2934.225 - 2949.120: 86.7116% ( 212) 00:32:03.459 2949.120 - 2964.015: 87.1069% ( 211) 00:32:03.459 2964.015 - 2978.909: 87.5152% ( 218) 00:32:03.459 
2978.909 - 2993.804: 87.9086% ( 210) 00:32:03.459 2993.804 - 3008.698: 88.2664% ( 191) 00:32:03.459 3008.698 - 3023.593: 88.6185% ( 188) 00:32:03.459 3023.593 - 3038.487: 88.9538% ( 179) 00:32:03.459 3038.487 - 3053.382: 89.3097% ( 190) 00:32:03.459 3053.382 - 3068.276: 89.6338% ( 173) 00:32:03.459 3068.276 - 3083.171: 89.9410% ( 164) 00:32:03.459 3083.171 - 3098.065: 90.2576% ( 169) 00:32:03.459 3098.065 - 3112.960: 90.5723% ( 168) 00:32:03.459 3112.960 - 3127.855: 90.8795% ( 164) 00:32:03.459 3127.855 - 3142.749: 91.1586% ( 149) 00:32:03.459 3142.749 - 3157.644: 91.4358% ( 148) 00:32:03.459 3157.644 - 3172.538: 91.7074% ( 145) 00:32:03.459 3172.538 - 3187.433: 91.9809% ( 146) 00:32:03.459 3187.433 - 3202.327: 92.2319% ( 134) 00:32:03.459 3202.327 - 3217.222: 92.5054% ( 146) 00:32:03.459 3217.222 - 3232.116: 92.7583% ( 135) 00:32:03.459 3232.116 - 3247.011: 93.0205% ( 140) 00:32:03.459 3247.011 - 3261.905: 93.2828% ( 140) 00:32:03.459 3261.905 - 3276.800: 93.5225% ( 128) 00:32:03.459 3276.800 - 3291.695: 93.7548% ( 124) 00:32:03.459 3291.695 - 3306.589: 93.9927% ( 127) 00:32:03.459 3306.589 - 3321.484: 94.2231% ( 123) 00:32:03.459 3321.484 - 3336.378: 94.4479% ( 120) 00:32:03.459 3336.378 - 3351.273: 94.6689% ( 118) 00:32:03.459 3351.273 - 3366.167: 94.8974% ( 122) 00:32:03.459 3366.167 - 3381.062: 95.0997% ( 108) 00:32:03.459 3381.062 - 3395.956: 95.3077% ( 111) 00:32:03.459 3395.956 - 3410.851: 95.5081% ( 107) 00:32:03.459 3410.851 - 3425.745: 95.7123% ( 109) 00:32:03.459 3425.745 - 3440.640: 95.9090% ( 105) 00:32:03.459 3440.640 - 3455.535: 96.1094% ( 107) 00:32:03.459 3455.535 - 3470.429: 96.2855% ( 94) 00:32:03.459 3470.429 - 3485.324: 96.4597% ( 93) 00:32:03.459 3485.324 - 3500.218: 96.6358% ( 94) 00:32:03.459 3500.218 - 3515.113: 96.8081% ( 92) 00:32:03.459 3515.113 - 3530.007: 96.9786% ( 91) 00:32:03.459 3530.007 - 3544.902: 97.1340% ( 83) 00:32:03.459 3544.902 - 3559.796: 97.2858% ( 81) 00:32:03.459 3559.796 - 3574.691: 97.4319% ( 78) 00:32:03.459 3574.691 - 3589.585: 97.5873% ( 83) 00:32:03.459 3589.585 - 3604.480: 97.7185% ( 70) 00:32:03.459 3604.480 - 3619.375: 97.8402% ( 65) 00:32:03.459 3619.375 - 3634.269: 97.9657% ( 67) 00:32:03.459 3634.269 - 3649.164: 98.0725% ( 57) 00:32:03.459 3649.164 - 3664.058: 98.1793% ( 57) 00:32:03.459 3664.058 - 3678.953: 98.2785% ( 53) 00:32:03.459 3678.953 - 3693.847: 98.3797% ( 54) 00:32:03.459 3693.847 - 3708.742: 98.4734% ( 50) 00:32:03.459 3708.742 - 3723.636: 98.5595% ( 46) 00:32:03.459 3723.636 - 3738.531: 98.6382% ( 42) 00:32:03.459 3738.531 - 3753.425: 98.7225% ( 45) 00:32:03.459 3753.425 - 3768.320: 98.7824% ( 32) 00:32:03.459 3768.320 - 3783.215: 98.8405% ( 31) 00:32:03.459 3783.215 - 3798.109: 98.8986% ( 31) 00:32:03.459 3798.109 - 3813.004: 98.9491% ( 27) 00:32:03.459 3813.004 - 3842.793: 99.0259% ( 41) 00:32:03.459 3842.793 - 3872.582: 99.0953% ( 37) 00:32:03.459 3872.582 - 3902.371: 99.1533% ( 31) 00:32:03.459 3902.371 - 3932.160: 99.2039% ( 27) 00:32:03.459 3932.160 - 3961.949: 99.2489% ( 24) 00:32:03.459 3961.949 - 3991.738: 99.2788% ( 16) 00:32:03.459 3991.738 - 4021.527: 99.3088% ( 16) 00:32:03.459 4021.527 - 4051.316: 99.3388% ( 16) 00:32:03.459 4051.316 - 4081.105: 99.3575% ( 10) 00:32:03.459 4081.105 - 4110.895: 99.3744% ( 9) 00:32:03.459 4110.895 - 4140.684: 99.3987% ( 13) 00:32:03.459 4140.684 - 4170.473: 99.4212% ( 12) 00:32:03.459 4170.473 - 4200.262: 99.4437% ( 12) 00:32:03.459 4200.262 - 4230.051: 99.4718% ( 15) 00:32:03.459 4230.051 - 4259.840: 99.4924% ( 11) 00:32:03.459 4259.840 - 4289.629: 99.5111% ( 10) 
00:32:03.459 4289.629 - 4319.418: 99.5373% ( 14) 00:32:03.459 4319.418 - 4349.207: 99.5617% ( 13) 00:32:03.459 4349.207 - 4378.996: 99.5823% ( 11) 00:32:03.459 4378.996 - 4408.785: 99.5991% ( 9) 00:32:03.459 4408.785 - 4438.575: 99.6160% ( 9) 00:32:03.459 4438.575 - 4468.364: 99.6310% ( 8) 00:32:03.459 4468.364 - 4498.153: 99.6478% ( 9) 00:32:03.459 4498.153 - 4527.942: 99.6610% ( 7) 00:32:03.459 4527.942 - 4557.731: 99.6722% ( 6) 00:32:03.459 4557.731 - 4587.520: 99.6816% ( 5) 00:32:03.459 4587.520 - 4617.309: 99.6909% ( 5) 00:32:03.459 4617.309 - 4647.098: 99.7022% ( 6) 00:32:03.459 4647.098 - 4676.887: 99.7153% ( 7) 00:32:03.459 4676.887 - 4706.676: 99.7265% ( 6) 00:32:03.459 4706.676 - 4736.465: 99.7378% ( 6) 00:32:03.459 4736.465 - 4766.255: 99.7527% ( 8) 00:32:03.459 4766.255 - 4796.044: 99.7640% ( 6) 00:32:03.459 4796.044 - 4825.833: 99.7733% ( 5) 00:32:03.459 4825.833 - 4855.622: 99.7827% ( 5) 00:32:03.459 4855.622 - 4885.411: 99.7902% ( 4) 00:32:03.459 4885.411 - 4915.200: 99.7996% ( 5) 00:32:03.459 4915.200 - 4944.989: 99.8052% ( 3) 00:32:03.459 4944.989 - 4974.778: 99.8146% ( 5) 00:32:03.459 4974.778 - 5004.567: 99.8239% ( 5) 00:32:03.459 5004.567 - 5034.356: 99.8314% ( 4) 00:32:03.459 5034.356 - 5064.145: 99.8370% ( 3) 00:32:03.459 5064.145 - 5093.935: 99.8464% ( 5) 00:32:03.459 5093.935 - 5123.724: 99.8539% ( 4) 00:32:03.459 5123.724 - 5153.513: 99.8614% ( 4) 00:32:03.459 5153.513 - 5183.302: 99.8708% ( 5) 00:32:03.459 5183.302 - 5213.091: 99.8782% ( 4) 00:32:03.459 5213.091 - 5242.880: 99.8857% ( 4) 00:32:03.459 5242.880 - 5272.669: 99.8932% ( 4) 00:32:03.459 5272.669 - 5302.458: 99.9007% ( 4) 00:32:03.459 5302.458 - 5332.247: 99.9082% ( 4) 00:32:03.459 5332.247 - 5362.036: 99.9176% ( 5) 00:32:03.459 5362.036 - 5391.825: 99.9251% ( 4) 00:32:03.459 5391.825 - 5421.615: 99.9307% ( 3) 00:32:03.459 5421.615 - 5451.404: 99.9401% ( 5) 00:32:03.459 5451.404 - 5481.193: 99.9457% ( 3) 00:32:03.459 5481.193 - 5510.982: 99.9513% ( 3) 00:32:03.459 5510.982 - 5540.771: 99.9569% ( 3) 00:32:03.459 5540.771 - 5570.560: 99.9607% ( 2) 00:32:03.459 5570.560 - 5600.349: 99.9644% ( 2) 00:32:03.459 5600.349 - 5630.138: 99.9700% ( 3) 00:32:03.459 5630.138 - 5659.927: 99.9738% ( 2) 00:32:03.459 5659.927 - 5689.716: 99.9775% ( 2) 00:32:03.459 5689.716 - 5719.505: 99.9831% ( 3) 00:32:03.459 5719.505 - 5749.295: 99.9850% ( 1) 00:32:03.459 7298.327 - 7328.116: 99.9869% ( 1) 00:32:03.459 7357.905 - 7387.695: 99.9888% ( 1) 00:32:03.459 7387.695 - 7417.484: 99.9906% ( 1) 00:32:03.459 7417.484 - 7447.273: 99.9925% ( 1) 00:32:03.459 7477.062 - 7506.851: 99.9944% ( 1) 00:32:03.459 7506.851 - 7536.640: 99.9963% ( 1) 00:32:03.459 7536.640 - 7566.429: 100.0000% ( 2) 00:32:03.459 00:32:03.459 12:16:08 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:32:03.460 00:32:03.460 real 0m2.616s 00:32:03.460 user 0m2.244s 00:32:03.460 sys 0m0.198s 00:32:03.460 ************************************ 00:32:03.460 12:16:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:03.460 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:32:03.460 END TEST nvme_perf 00:32:03.460 ************************************ 00:32:03.460 12:16:08 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:03.460 12:16:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:32:03.460 12:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:03.460 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:32:03.460 ************************************ 00:32:03.460 
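The stages that follow are all launched through the run_test wrapper from autotest_common.sh, which prints the START/END banners and the real/user/sys timing around each small SPDK test binary. Collected here from the log below for easier reading; the commands and their flags are as logged, the one-line notes summarize each test's own output, and running them outside the harness is an assumption:

  run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0          # attaches to 0000:00:06.0, one write/read, prints "Hello world!"
  run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl                                # scatter-gather build_io_request checks
  run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp                        # end-to-end data protection write/read test
  run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve                    # reservation support probe (Not Supported on this controller)
  run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection  # injected get-features/read failure checks
  run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0  # per-IO submit/complete overhead histograms
  run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0     # arbitration example across lcores 0-3 for 3 seconds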
START TEST nvme_hello_world 00:32:03.460 ************************************ 00:32:03.460 12:16:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:03.718 Initializing NVMe Controllers 00:32:03.718 Attached to 0000:00:06.0 00:32:03.718 Namespace ID: 1 size: 5GB 00:32:03.718 Initialization complete. 00:32:03.718 INFO: using host memory buffer for IO 00:32:03.718 Hello world! 00:32:03.718 00:32:03.718 real 0m0.298s 00:32:03.718 user 0m0.086s 00:32:03.718 sys 0m0.119s 00:32:03.718 12:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:03.718 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:32:03.718 ************************************ 00:32:03.718 END TEST nvme_hello_world 00:32:03.718 ************************************ 00:32:03.718 12:16:09 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:03.718 12:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:03.718 12:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:03.718 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:32:03.718 ************************************ 00:32:03.718 START TEST nvme_sgl 00:32:03.718 ************************************ 00:32:03.718 12:16:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:03.977 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:32:03.977 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:32:03.977 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:32:03.977 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:32:03.977 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:32:03.977 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:32:03.977 NVMe Readv/Writev Request test 00:32:03.977 Attached to 0000:00:06.0 00:32:03.977 0000:00:06.0: build_io_request_2 test passed 00:32:03.977 0000:00:06.0: build_io_request_4 test passed 00:32:03.977 0000:00:06.0: build_io_request_5 test passed 00:32:03.977 0000:00:06.0: build_io_request_6 test passed 00:32:03.977 0000:00:06.0: build_io_request_7 test passed 00:32:03.977 0000:00:06.0: build_io_request_10 test passed 00:32:03.977 Cleaning up... 00:32:03.977 00:32:03.977 real 0m0.332s 00:32:03.977 user 0m0.137s 00:32:03.977 sys 0m0.109s 00:32:03.977 12:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:03.977 ************************************ 00:32:03.977 END TEST nvme_sgl 00:32:03.977 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:32:03.977 ************************************ 00:32:04.235 12:16:09 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:32:04.235 12:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:04.235 12:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:04.235 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:32:04.235 ************************************ 00:32:04.235 START TEST nvme_e2edp 00:32:04.235 ************************************ 00:32:04.235 12:16:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:32:04.493 NVMe Write/Read with End-to-End data protection test 00:32:04.493 Attached to 0000:00:06.0 00:32:04.493 Cleaning up... 
00:32:04.493 00:32:04.493 real 0m0.267s 00:32:04.493 user 0m0.100s 00:32:04.493 sys 0m0.101s 00:32:04.493 12:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:04.493 ************************************ 00:32:04.493 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:32:04.493 END TEST nvme_e2edp 00:32:04.493 ************************************ 00:32:04.493 12:16:09 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:32:04.493 12:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:04.493 12:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:04.493 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:32:04.493 ************************************ 00:32:04.493 START TEST nvme_reserve 00:32:04.493 ************************************ 00:32:04.494 12:16:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:32:04.751 ===================================================== 00:32:04.751 NVMe Controller at PCI bus 0, device 6, function 0 00:32:04.751 ===================================================== 00:32:04.751 Reservations: Not Supported 00:32:04.751 Reservation test passed 00:32:04.751 00:32:04.751 real 0m0.291s 00:32:04.751 user 0m0.110s 00:32:04.751 sys 0m0.090s 00:32:04.751 12:16:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:04.751 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:32:04.751 ************************************ 00:32:04.751 END TEST nvme_reserve 00:32:04.751 ************************************ 00:32:04.751 12:16:10 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:32:04.751 12:16:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:04.751 12:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:04.751 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:32:04.751 ************************************ 00:32:04.751 START TEST nvme_err_injection 00:32:04.751 ************************************ 00:32:04.752 12:16:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:32:05.010 NVMe Error Injection test 00:32:05.010 Attached to 0000:00:06.0 00:32:05.010 0000:00:06.0: get features failed as expected 00:32:05.010 0000:00:06.0: get features successfully as expected 00:32:05.010 0000:00:06.0: read failed as expected 00:32:05.010 0000:00:06.0: read successfully as expected 00:32:05.010 Cleaning up... 
00:32:05.010 00:32:05.010 real 0m0.295s 00:32:05.010 user 0m0.096s 00:32:05.010 sys 0m0.113s 00:32:05.010 12:16:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:05.010 ************************************ 00:32:05.010 END TEST nvme_err_injection 00:32:05.010 ************************************ 00:32:05.010 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 12:16:10 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:05.010 12:16:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:32:05.010 12:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:05.010 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 ************************************ 00:32:05.010 START TEST nvme_overhead 00:32:05.010 ************************************ 00:32:05.010 12:16:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:06.388 Initializing NVMe Controllers 00:32:06.388 Attached to 0000:00:06.0 00:32:06.388 Initialization complete. Launching workers. 00:32:06.388 submit (in ns) avg, min, max = 15409.5, 12505.5, 90149.1 00:32:06.388 complete (in ns) avg, min, max = 10032.6, 8960.0, 82263.2 00:32:06.388 00:32:06.388 Submit histogram 00:32:06.388 ================ 00:32:06.388 Range in us Cumulative Count 00:32:06.388 12.451 - 12.509: 0.0112% ( 1) 00:32:06.388 12.625 - 12.684: 0.0223% ( 1) 00:32:06.388 13.964 - 14.022: 0.3909% ( 33) 00:32:06.388 14.022 - 14.080: 2.0775% ( 151) 00:32:06.388 14.080 - 14.138: 7.6734% ( 501) 00:32:06.388 14.138 - 14.196: 19.0886% ( 1022) 00:32:06.388 14.196 - 14.255: 33.9104% ( 1327) 00:32:06.388 14.255 - 14.313: 46.1968% ( 1100) 00:32:06.388 14.313 - 14.371: 53.6021% ( 663) 00:32:06.388 14.371 - 14.429: 57.8130% ( 377) 00:32:06.388 14.429 - 14.487: 60.6612% ( 255) 00:32:06.388 14.487 - 14.545: 62.9956% ( 209) 00:32:06.388 14.545 - 14.604: 65.3971% ( 215) 00:32:06.388 14.604 - 14.662: 67.2847% ( 169) 00:32:06.388 14.662 - 14.720: 68.5022% ( 109) 00:32:06.388 14.720 - 14.778: 69.3399% ( 75) 00:32:06.388 14.778 - 14.836: 70.0659% ( 65) 00:32:06.388 14.836 - 14.895: 70.5797% ( 46) 00:32:06.388 14.895 - 15.011: 71.5849% ( 90) 00:32:06.388 15.011 - 15.127: 72.3333% ( 67) 00:32:06.388 15.127 - 15.244: 72.7912% ( 41) 00:32:06.388 15.244 - 15.360: 73.0370% ( 22) 00:32:06.388 15.360 - 15.476: 73.1598% ( 11) 00:32:06.388 15.476 - 15.593: 73.2715% ( 10) 00:32:06.388 15.593 - 15.709: 73.3721% ( 9) 00:32:06.388 15.709 - 15.825: 73.4614% ( 8) 00:32:06.388 15.825 - 15.942: 73.5284% ( 6) 00:32:06.388 15.942 - 16.058: 73.6178% ( 8) 00:32:06.388 16.058 - 16.175: 73.7071% ( 8) 00:32:06.388 16.175 - 16.291: 73.7518% ( 4) 00:32:06.388 16.291 - 16.407: 73.8077% ( 5) 00:32:06.388 16.407 - 16.524: 73.8523% ( 4) 00:32:06.388 16.524 - 16.640: 73.8747% ( 2) 00:32:06.388 16.640 - 16.756: 73.9194% ( 4) 00:32:06.388 16.756 - 16.873: 73.9529% ( 3) 00:32:06.388 16.873 - 16.989: 74.0199% ( 6) 00:32:06.388 16.989 - 17.105: 74.5895% ( 51) 00:32:06.388 17.105 - 17.222: 77.3931% ( 251) 00:32:06.388 17.222 - 17.338: 82.2853% ( 438) 00:32:06.388 17.338 - 17.455: 87.6354% ( 479) 00:32:06.388 17.455 - 17.571: 90.3273% ( 241) 00:32:06.388 17.571 - 17.687: 92.0139% ( 151) 00:32:06.388 17.687 - 17.804: 93.4770% ( 131) 00:32:06.388 17.804 - 17.920: 94.1807% ( 63) 00:32:06.388 17.920 - 18.036: 94.6387% ( 41) 00:32:06.388 18.036 - 18.153: 95.0408% ( 36) 00:32:06.388 18.153 - 18.269: 95.3535% ( 28) 00:32:06.388 18.269 - 
18.385: 95.5546% ( 18) 00:32:06.388 18.385 - 18.502: 95.8115% ( 23) 00:32:06.388 18.502 - 18.618: 95.8896% ( 7) 00:32:06.388 18.618 - 18.735: 96.0237% ( 12) 00:32:06.388 18.735 - 18.851: 96.1019% ( 7) 00:32:06.388 18.851 - 18.967: 96.2024% ( 9) 00:32:06.388 18.967 - 19.084: 96.2917% ( 8) 00:32:06.388 19.084 - 19.200: 96.3699% ( 7) 00:32:06.388 19.200 - 19.316: 96.4481% ( 7) 00:32:06.388 19.316 - 19.433: 96.4928% ( 4) 00:32:06.388 19.433 - 19.549: 96.5710% ( 7) 00:32:06.388 19.549 - 19.665: 96.6157% ( 4) 00:32:06.388 19.665 - 19.782: 96.6938% ( 7) 00:32:06.388 19.782 - 19.898: 96.7497% ( 5) 00:32:06.388 19.898 - 20.015: 96.7720% ( 2) 00:32:06.388 20.015 - 20.131: 96.8279% ( 5) 00:32:06.388 20.131 - 20.247: 96.8614% ( 3) 00:32:06.388 20.247 - 20.364: 96.9284% ( 6) 00:32:06.388 20.364 - 20.480: 96.9843% ( 5) 00:32:06.388 20.480 - 20.596: 97.0401% ( 5) 00:32:06.388 20.596 - 20.713: 97.1071% ( 6) 00:32:06.388 20.713 - 20.829: 97.1741% ( 6) 00:32:06.388 20.829 - 20.945: 97.2411% ( 6) 00:32:06.388 20.945 - 21.062: 97.2858% ( 4) 00:32:06.388 21.062 - 21.178: 97.3640% ( 7) 00:32:06.388 21.178 - 21.295: 97.4199% ( 5) 00:32:06.388 21.295 - 21.411: 97.4757% ( 5) 00:32:06.388 21.411 - 21.527: 97.5092% ( 3) 00:32:06.388 21.527 - 21.644: 97.5874% ( 7) 00:32:06.388 21.644 - 21.760: 97.6209% ( 3) 00:32:06.388 21.760 - 21.876: 97.6656% ( 4) 00:32:06.388 21.876 - 21.993: 97.6879% ( 2) 00:32:06.388 21.993 - 22.109: 97.6991% ( 1) 00:32:06.388 22.109 - 22.225: 97.7214% ( 2) 00:32:06.388 22.225 - 22.342: 97.7661% ( 4) 00:32:06.388 22.342 - 22.458: 97.7996% ( 3) 00:32:06.388 22.458 - 22.575: 97.8778% ( 7) 00:32:06.388 22.575 - 22.691: 97.9337% ( 5) 00:32:06.388 22.691 - 22.807: 97.9448% ( 1) 00:32:06.388 22.807 - 22.924: 98.0342% ( 8) 00:32:06.388 22.924 - 23.040: 98.0789% ( 4) 00:32:06.388 23.040 - 23.156: 98.1570% ( 7) 00:32:06.388 23.156 - 23.273: 98.2017% ( 4) 00:32:06.388 23.273 - 23.389: 98.2464% ( 4) 00:32:06.388 23.389 - 23.505: 98.2911% ( 4) 00:32:06.388 23.505 - 23.622: 98.3246% ( 3) 00:32:06.388 23.622 - 23.738: 98.3916% ( 6) 00:32:06.388 23.738 - 23.855: 98.4363% ( 4) 00:32:06.388 23.855 - 23.971: 98.4921% ( 5) 00:32:06.388 23.971 - 24.087: 98.5480% ( 5) 00:32:06.388 24.087 - 24.204: 98.5703% ( 2) 00:32:06.388 24.204 - 24.320: 98.5927% ( 2) 00:32:06.388 24.320 - 24.436: 98.6485% ( 5) 00:32:06.388 24.436 - 24.553: 98.7043% ( 5) 00:32:06.388 24.553 - 24.669: 98.7155% ( 1) 00:32:06.388 24.669 - 24.785: 98.7602% ( 4) 00:32:06.388 24.785 - 24.902: 98.7714% ( 1) 00:32:06.388 24.902 - 25.018: 98.7937% ( 2) 00:32:06.388 25.018 - 25.135: 98.8384% ( 4) 00:32:06.388 25.135 - 25.251: 98.8831% ( 4) 00:32:06.388 25.251 - 25.367: 98.8942% ( 1) 00:32:06.388 25.367 - 25.484: 98.9166% ( 2) 00:32:06.388 25.484 - 25.600: 98.9389% ( 2) 00:32:06.388 25.600 - 25.716: 98.9724% ( 3) 00:32:06.388 25.716 - 25.833: 99.0059% ( 3) 00:32:06.388 25.833 - 25.949: 99.0506% ( 4) 00:32:06.388 25.949 - 26.065: 99.0841% ( 3) 00:32:06.388 26.065 - 26.182: 99.1288% ( 4) 00:32:06.388 26.182 - 26.298: 99.1735% ( 4) 00:32:06.388 26.298 - 26.415: 99.1958% ( 2) 00:32:06.388 26.415 - 26.531: 99.2181% ( 2) 00:32:06.388 26.531 - 26.647: 99.2516% ( 3) 00:32:06.388 26.647 - 26.764: 99.2628% ( 1) 00:32:06.388 26.880 - 26.996: 99.2852% ( 2) 00:32:06.388 26.996 - 27.113: 99.3187% ( 3) 00:32:06.388 27.229 - 27.345: 99.3410% ( 2) 00:32:06.388 27.345 - 27.462: 99.3857% ( 4) 00:32:06.388 27.462 - 27.578: 99.3969% ( 1) 00:32:06.388 27.578 - 27.695: 99.4304% ( 3) 00:32:06.388 27.695 - 27.811: 99.4750% ( 4) 00:32:06.388 27.811 - 27.927: 99.4974% ( 2) 
00:32:06.388 27.927 - 28.044: 99.5309% ( 3) 00:32:06.388 28.044 - 28.160: 99.5979% ( 6) 00:32:06.388 28.160 - 28.276: 99.6202% ( 2) 00:32:06.388 28.509 - 28.625: 99.6649% ( 4) 00:32:06.388 28.625 - 28.742: 99.6761% ( 1) 00:32:06.388 28.975 - 29.091: 99.7096% ( 3) 00:32:06.388 29.091 - 29.207: 99.7319% ( 2) 00:32:06.388 29.207 - 29.324: 99.7543% ( 2) 00:32:06.388 29.556 - 29.673: 99.7654% ( 1) 00:32:06.388 29.789 - 30.022: 99.7766% ( 1) 00:32:06.388 30.022 - 30.255: 99.8101% ( 3) 00:32:06.388 30.255 - 30.487: 99.8436% ( 3) 00:32:06.388 30.487 - 30.720: 99.8548% ( 1) 00:32:06.388 31.418 - 31.651: 99.8771% ( 2) 00:32:06.388 31.884 - 32.116: 99.8883% ( 1) 00:32:06.388 33.280 - 33.513: 99.8995% ( 1) 00:32:06.388 35.142 - 35.375: 99.9106% ( 1) 00:32:06.388 36.073 - 36.305: 99.9218% ( 1) 00:32:06.388 36.538 - 36.771: 99.9330% ( 1) 00:32:06.388 37.469 - 37.702: 99.9442% ( 1) 00:32:06.388 39.098 - 39.331: 99.9553% ( 1) 00:32:06.388 40.262 - 40.495: 99.9665% ( 1) 00:32:06.388 41.891 - 42.124: 99.9777% ( 1) 00:32:06.388 56.553 - 56.785: 99.9888% ( 1) 00:32:06.388 89.833 - 90.298: 100.0000% ( 1) 00:32:06.388 00:32:06.388 Complete histogram 00:32:06.388 ================== 00:32:06.388 Range in us Cumulative Count 00:32:06.388 8.960 - 9.018: 0.8824% ( 79) 00:32:06.388 9.018 - 9.076: 9.2706% ( 751) 00:32:06.388 9.076 - 9.135: 28.3815% ( 1711) 00:32:06.388 9.135 - 9.193: 48.0956% ( 1765) 00:32:06.388 9.193 - 9.251: 59.8459% ( 1052) 00:32:06.388 9.251 - 9.309: 65.5870% ( 514) 00:32:06.388 9.309 - 9.367: 67.5863% ( 179) 00:32:06.388 9.367 - 9.425: 68.6250% ( 93) 00:32:06.388 9.425 - 9.484: 69.5298% ( 81) 00:32:06.388 9.484 - 9.542: 70.3563% ( 74) 00:32:06.388 9.542 - 9.600: 70.7472% ( 35) 00:32:06.388 9.600 - 9.658: 71.1493% ( 36) 00:32:06.388 9.658 - 9.716: 71.3280% ( 16) 00:32:06.388 9.716 - 9.775: 71.4174% ( 8) 00:32:06.389 9.775 - 9.833: 71.4732% ( 5) 00:32:06.389 9.833 - 9.891: 71.5068% ( 3) 00:32:06.389 9.891 - 9.949: 71.5849% ( 7) 00:32:06.389 9.949 - 10.007: 71.7748% ( 17) 00:32:06.389 10.007 - 10.065: 71.9647% ( 17) 00:32:06.389 10.065 - 10.124: 72.2998% ( 30) 00:32:06.389 10.124 - 10.182: 72.6125% ( 28) 00:32:06.389 10.182 - 10.240: 72.8359% ( 20) 00:32:06.389 10.240 - 10.298: 73.0035% ( 15) 00:32:06.389 10.298 - 10.356: 73.0816% ( 7) 00:32:06.389 10.356 - 10.415: 73.1263% ( 4) 00:32:06.389 10.415 - 10.473: 73.1710% ( 4) 00:32:06.389 10.473 - 10.531: 73.2492% ( 7) 00:32:06.389 10.531 - 10.589: 73.2715% ( 2) 00:32:06.389 10.589 - 10.647: 73.3050% ( 3) 00:32:06.389 10.647 - 10.705: 73.3385% ( 3) 00:32:06.389 10.705 - 10.764: 73.3832% ( 4) 00:32:06.389 10.764 - 10.822: 73.4167% ( 3) 00:32:06.389 10.822 - 10.880: 73.4502% ( 3) 00:32:06.389 10.880 - 10.938: 73.5061% ( 5) 00:32:06.389 10.938 - 10.996: 73.5731% ( 6) 00:32:06.389 10.996 - 11.055: 73.6290% ( 5) 00:32:06.389 11.055 - 11.113: 73.6736% ( 4) 00:32:06.389 11.113 - 11.171: 73.7183% ( 4) 00:32:06.389 11.171 - 11.229: 73.8300% ( 10) 00:32:06.389 11.229 - 11.287: 74.5225% ( 62) 00:32:06.389 11.287 - 11.345: 77.7170% ( 286) 00:32:06.389 11.345 - 11.404: 83.5362% ( 521) 00:32:06.389 11.404 - 11.462: 88.8641% ( 477) 00:32:06.389 11.462 - 11.520: 91.9133% ( 273) 00:32:06.389 11.520 - 11.578: 93.4435% ( 137) 00:32:06.389 11.578 - 11.636: 94.1919% ( 67) 00:32:06.389 11.636 - 11.695: 94.7615% ( 51) 00:32:06.389 11.695 - 11.753: 95.0408% ( 25) 00:32:06.389 11.753 - 11.811: 95.3870% ( 31) 00:32:06.389 11.811 - 11.869: 95.6439% ( 23) 00:32:06.389 11.869 - 11.927: 95.7444% ( 9) 00:32:06.389 11.927 - 11.985: 95.8450% ( 9) 00:32:06.389 11.985 - 12.044: 95.9455% 
( 9) 00:32:06.389 12.044 - 12.102: 96.0237% ( 7) 00:32:06.389 12.102 - 12.160: 96.0460% ( 2) 00:32:06.389 12.160 - 12.218: 96.0907% ( 4) 00:32:06.389 12.218 - 12.276: 96.1130% ( 2) 00:32:06.389 12.276 - 12.335: 96.1354% ( 2) 00:32:06.389 12.335 - 12.393: 96.1801% ( 4) 00:32:06.389 12.393 - 12.451: 96.2247% ( 4) 00:32:06.389 12.451 - 12.509: 96.3364% ( 10) 00:32:06.389 12.509 - 12.567: 96.4816% ( 13) 00:32:06.389 12.567 - 12.625: 96.5375% ( 5) 00:32:06.389 12.625 - 12.684: 96.5933% ( 5) 00:32:06.389 12.684 - 12.742: 96.6268% ( 3) 00:32:06.389 12.742 - 12.800: 96.6492% ( 2) 00:32:06.389 12.800 - 12.858: 96.6603% ( 1) 00:32:06.389 12.858 - 12.916: 96.6827% ( 2) 00:32:06.389 12.916 - 12.975: 96.7274% ( 4) 00:32:06.389 12.975 - 13.033: 96.7609% ( 3) 00:32:06.389 13.033 - 13.091: 96.7720% ( 1) 00:32:06.389 13.149 - 13.207: 96.7944% ( 2) 00:32:06.389 13.207 - 13.265: 96.8279% ( 3) 00:32:06.389 13.265 - 13.324: 96.8726% ( 4) 00:32:06.389 13.324 - 13.382: 96.8949% ( 2) 00:32:06.389 13.382 - 13.440: 96.9507% ( 5) 00:32:06.389 13.440 - 13.498: 96.9619% ( 1) 00:32:06.389 13.498 - 13.556: 96.9954% ( 3) 00:32:06.389 13.556 - 13.615: 97.0401% ( 4) 00:32:06.389 13.615 - 13.673: 97.0513% ( 1) 00:32:06.389 13.673 - 13.731: 97.0959% ( 4) 00:32:06.389 13.731 - 13.789: 97.1071% ( 1) 00:32:06.389 13.905 - 13.964: 97.1741% ( 6) 00:32:06.389 13.964 - 14.022: 97.1965% ( 2) 00:32:06.389 14.022 - 14.080: 97.2076% ( 1) 00:32:06.389 14.080 - 14.138: 97.2411% ( 3) 00:32:06.389 14.138 - 14.196: 97.3417% ( 9) 00:32:06.389 14.196 - 14.255: 97.3864% ( 4) 00:32:06.389 14.255 - 14.313: 97.4310% ( 4) 00:32:06.389 14.313 - 14.371: 97.5427% ( 10) 00:32:06.389 14.371 - 14.429: 97.5762% ( 3) 00:32:06.389 14.429 - 14.487: 97.6097% ( 3) 00:32:06.389 14.487 - 14.545: 97.6209% ( 1) 00:32:06.389 14.545 - 14.604: 97.6432% ( 2) 00:32:06.389 14.604 - 14.662: 97.6544% ( 1) 00:32:06.389 14.662 - 14.720: 97.6768% ( 2) 00:32:06.389 14.720 - 14.778: 97.7103% ( 3) 00:32:06.389 14.836 - 14.895: 97.7214% ( 1) 00:32:06.389 14.895 - 15.011: 97.7549% ( 3) 00:32:06.389 15.011 - 15.127: 97.7996% ( 4) 00:32:06.389 15.127 - 15.244: 97.8443% ( 4) 00:32:06.389 15.244 - 15.360: 97.8890% ( 4) 00:32:06.389 15.360 - 15.476: 97.9448% ( 5) 00:32:06.389 15.476 - 15.593: 97.9895% ( 4) 00:32:06.389 15.709 - 15.825: 98.0342% ( 4) 00:32:06.389 15.825 - 15.942: 98.0677% ( 3) 00:32:06.389 15.942 - 16.058: 98.1235% ( 5) 00:32:06.389 16.058 - 16.175: 98.1347% ( 1) 00:32:06.389 16.175 - 16.291: 98.1570% ( 2) 00:32:06.389 16.291 - 16.407: 98.1906% ( 3) 00:32:06.389 16.407 - 16.524: 98.2464% ( 5) 00:32:06.389 16.524 - 16.640: 98.2799% ( 3) 00:32:06.389 16.640 - 16.756: 98.3134% ( 3) 00:32:06.389 16.756 - 16.873: 98.3804% ( 6) 00:32:06.389 16.873 - 16.989: 98.3916% ( 1) 00:32:06.389 16.989 - 17.105: 98.4028% ( 1) 00:32:06.389 17.105 - 17.222: 98.4363% ( 3) 00:32:06.389 17.222 - 17.338: 98.4586% ( 2) 00:32:06.389 17.338 - 17.455: 98.4921% ( 3) 00:32:06.389 17.455 - 17.571: 98.5145% ( 2) 00:32:06.389 17.571 - 17.687: 98.5591% ( 4) 00:32:06.389 17.687 - 17.804: 98.6038% ( 4) 00:32:06.389 17.804 - 17.920: 98.6597% ( 5) 00:32:06.389 17.920 - 18.036: 98.7379% ( 7) 00:32:06.389 18.036 - 18.153: 98.7825% ( 4) 00:32:06.389 18.153 - 18.269: 98.8160% ( 3) 00:32:06.389 18.269 - 18.385: 98.8384% ( 2) 00:32:06.389 18.385 - 18.502: 98.8831% ( 4) 00:32:06.389 18.502 - 18.618: 98.9501% ( 6) 00:32:06.389 18.618 - 18.735: 98.9948% ( 4) 00:32:06.389 18.735 - 18.851: 99.0171% ( 2) 00:32:06.389 18.851 - 18.967: 99.0618% ( 4) 00:32:06.389 18.967 - 19.084: 99.1064% ( 4) 00:32:06.389 19.084 - 
19.200: 99.1176% ( 1) 00:32:06.389 19.200 - 19.316: 99.1400% ( 2) 00:32:06.389 19.316 - 19.433: 99.1735% ( 3) 00:32:06.389 19.433 - 19.549: 99.1958% ( 2) 00:32:06.389 19.549 - 19.665: 99.2181% ( 2) 00:32:06.389 19.665 - 19.782: 99.2293% ( 1) 00:32:06.389 19.782 - 19.898: 99.2516% ( 2) 00:32:06.389 19.898 - 20.015: 99.2740% ( 2) 00:32:06.389 20.015 - 20.131: 99.2963% ( 2) 00:32:06.389 20.131 - 20.247: 99.3187% ( 2) 00:32:06.389 20.247 - 20.364: 99.3298% ( 1) 00:32:06.389 20.364 - 20.480: 99.3410% ( 1) 00:32:06.389 20.480 - 20.596: 99.3857% ( 4) 00:32:06.389 20.596 - 20.713: 99.4080% ( 2) 00:32:06.389 20.713 - 20.829: 99.4304% ( 2) 00:32:06.389 20.945 - 21.062: 99.4415% ( 1) 00:32:06.389 21.062 - 21.178: 99.4750% ( 3) 00:32:06.389 21.178 - 21.295: 99.4974% ( 2) 00:32:06.389 21.295 - 21.411: 99.5085% ( 1) 00:32:06.389 21.411 - 21.527: 99.5532% ( 4) 00:32:06.389 21.527 - 21.644: 99.5644% ( 1) 00:32:06.389 21.644 - 21.760: 99.5979% ( 3) 00:32:06.389 21.993 - 22.109: 99.6091% ( 1) 00:32:06.389 22.225 - 22.342: 99.6426% ( 3) 00:32:06.389 22.342 - 22.458: 99.6761% ( 3) 00:32:06.389 22.458 - 22.575: 99.6873% ( 1) 00:32:06.389 22.575 - 22.691: 99.6984% ( 1) 00:32:06.389 22.807 - 22.924: 99.7096% ( 1) 00:32:06.389 22.924 - 23.040: 99.7543% ( 4) 00:32:06.389 23.156 - 23.273: 99.7654% ( 1) 00:32:06.389 23.505 - 23.622: 99.7766% ( 1) 00:32:06.389 23.622 - 23.738: 99.7878% ( 1) 00:32:06.389 24.204 - 24.320: 99.7990% ( 1) 00:32:06.389 25.600 - 25.716: 99.8101% ( 1) 00:32:06.389 26.298 - 26.415: 99.8213% ( 1) 00:32:06.389 26.647 - 26.764: 99.8325% ( 1) 00:32:06.389 27.927 - 28.044: 99.8436% ( 1) 00:32:06.389 28.858 - 28.975: 99.8548% ( 1) 00:32:06.389 29.091 - 29.207: 99.8660% ( 1) 00:32:06.389 29.673 - 29.789: 99.8771% ( 1) 00:32:06.389 31.884 - 32.116: 99.8995% ( 2) 00:32:06.389 36.073 - 36.305: 99.9106% ( 1) 00:32:06.389 40.495 - 40.727: 99.9218% ( 1) 00:32:06.389 41.193 - 41.425: 99.9330% ( 1) 00:32:06.389 43.753 - 43.985: 99.9442% ( 1) 00:32:06.389 47.709 - 47.942: 99.9553% ( 1) 00:32:06.389 53.760 - 53.993: 99.9665% ( 1) 00:32:06.389 55.156 - 55.389: 99.9777% ( 1) 00:32:06.389 73.076 - 73.542: 99.9888% ( 1) 00:32:06.389 81.920 - 82.385: 100.0000% ( 1) 00:32:06.389 00:32:06.389 00:32:06.389 real 0m1.270s 00:32:06.389 user 0m1.108s 00:32:06.389 sys 0m0.093s 00:32:06.389 12:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:06.389 ************************************ 00:32:06.389 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:32:06.389 END TEST nvme_overhead 00:32:06.389 ************************************ 00:32:06.389 12:16:11 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:06.389 12:16:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:32:06.389 12:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:06.389 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:32:06.389 ************************************ 00:32:06.389 START TEST nvme_arbitration 00:32:06.389 ************************************ 00:32:06.389 12:16:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:09.671 Initializing NVMe Controllers 00:32:09.671 Attached to 0000:00:06.0 00:32:09.671 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:32:09.671 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:32:09.671 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:32:09.671 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:32:09.671 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:32:09.671 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:32:09.671 Initialization complete. Launching workers. 00:32:09.671 Starting thread on core 1 with urgent priority queue 00:32:09.671 Starting thread on core 2 with urgent priority queue 00:32:09.671 Starting thread on core 3 with urgent priority queue 00:32:09.671 Starting thread on core 0 with urgent priority queue 00:32:09.671 QEMU NVMe Ctrl (12340 ) core 0: 4950.33 IO/s 20.20 secs/100000 ios 00:32:09.671 QEMU NVMe Ctrl (12340 ) core 1: 4913.67 IO/s 20.35 secs/100000 ios 00:32:09.671 QEMU NVMe Ctrl (12340 ) core 2: 2957.00 IO/s 33.82 secs/100000 ios 00:32:09.671 QEMU NVMe Ctrl (12340 ) core 3: 2721.00 IO/s 36.75 secs/100000 ios 00:32:09.671 ======================================================== 00:32:09.671 00:32:09.671 00:32:09.671 real 0m3.361s 00:32:09.671 user 0m9.209s 00:32:09.671 sys 0m0.130s 00:32:09.671 12:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:09.671 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:32:09.671 ************************************ 00:32:09.671 END TEST nvme_arbitration 00:32:09.671 ************************************ 00:32:09.929 12:16:15 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:32:09.929 12:16:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:32:09.929 12:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:09.929 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:32:09.929 ************************************ 00:32:09.929 START TEST nvme_single_aen 00:32:09.929 ************************************ 00:32:09.929 12:16:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:32:09.929 [2024-11-29 12:16:15.240459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:32:09.929 [2024-11-29 12:16:15.240590] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.929 [2024-11-29 12:16:15.405508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:10.292 Asynchronous Event Request test 00:32:10.292 Attached to 0000:00:06.0 00:32:10.292 Reset controller to setup AER completions for this process 00:32:10.292 Registering asynchronous event callbacks... 00:32:10.292 Getting orig temperature thresholds of all controllers 00:32:10.292 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:10.292 Setting all controllers temperature threshold low to trigger AER 00:32:10.292 Waiting for all controllers temperature threshold to be set lower 00:32:10.292 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:10.292 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:32:10.292 Waiting for all controllers to trigger AER and reset threshold 00:32:10.292 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:10.292 Cleaning up... 
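The aer run that just finished above follows a simple loop: read the Temperature Threshold feature, drop it below the current reading so the controller raises a temperature AER, then restore it from the aer_cb. A rough hand-driven equivalent using nvme-cli is sketched below; this is illustrative only, since in this run the controller is claimed by SPDK rather than exposed as /dev/nvme0, and the device path and nvme-cli availability are assumptions.

    nvme get-feature /dev/nvme0 --feature-id=4                 # Temperature Threshold; 343 K (70 C) above
    nvme set-feature /dev/nvme0 --feature-id=4 --value=0x140   # 320 K, below the 323 K reading, so the AER fires
    nvme set-feature /dev/nvme0 --feature-id=4 --value=0x157   # restore 343 K, as aer_cb does above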
00:32:10.292 00:32:10.292 real 0m0.246s 00:32:10.292 user 0m0.081s 00:32:10.292 sys 0m0.102s 00:32:10.292 12:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:10.292 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:32:10.292 ************************************ 00:32:10.292 END TEST nvme_single_aen 00:32:10.292 ************************************ 00:32:10.292 12:16:15 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:32:10.292 12:16:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:10.292 12:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:10.292 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:32:10.292 ************************************ 00:32:10.292 START TEST nvme_doorbell_aers 00:32:10.292 ************************************ 00:32:10.292 12:16:15 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:32:10.292 12:16:15 -- nvme/nvme.sh@70 -- # bdfs=() 00:32:10.292 12:16:15 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:32:10.292 12:16:15 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:32:10.292 12:16:15 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:32:10.292 12:16:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:32:10.292 12:16:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:32:10.292 12:16:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:10.292 12:16:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:32:10.292 12:16:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:10.292 12:16:15 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:32:10.292 12:16:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:32:10.292 12:16:15 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:32:10.292 12:16:15 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:10.573 [2024-11-29 12:16:15.820539] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149898) is not found. Dropping the request. 00:32:20.550 Executing: test_write_invalid_db 00:32:20.550 Waiting for AER completion... 00:32:20.550 Failure: test_write_invalid_db 00:32:20.550 00:32:20.550 Executing: test_invalid_db_write_overflow_sq 00:32:20.550 Waiting for AER completion... 00:32:20.550 Failure: test_invalid_db_write_overflow_sq 00:32:20.550 00:32:20.550 Executing: test_invalid_db_write_overflow_cq 00:32:20.550 Waiting for AER completion... 
00:32:20.550 Failure: test_invalid_db_write_overflow_cq 00:32:20.550 00:32:20.550 00:32:20.550 real 0m10.113s 00:32:20.550 user 0m8.621s 00:32:20.550 sys 0m1.408s 00:32:20.550 12:16:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:20.550 ************************************ 00:32:20.550 END TEST nvme_doorbell_aers 00:32:20.550 ************************************ 00:32:20.550 12:16:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 12:16:25 -- nvme/nvme.sh@97 -- # uname 00:32:20.550 12:16:25 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:32:20.550 12:16:25 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:32:20.550 12:16:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:32:20.550 12:16:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:20.550 12:16:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 ************************************ 00:32:20.550 START TEST nvme_multi_aen 00:32:20.550 ************************************ 00:32:20.550 12:16:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:32:20.550 [2024-11-29 12:16:25.704601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:32:20.550 [2024-11-29 12:16:25.704771] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.550 [2024-11-29 12:16:25.909464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:20.550 [2024-11-29 12:16:25.909544] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149898) is not found. Dropping the request. 00:32:20.550 [2024-11-29 12:16:25.909638] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149898) is not found. Dropping the request. 00:32:20.550 [2024-11-29 12:16:25.909666] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149898) is not found. Dropping the request. 00:32:20.550 [2024-11-29 12:16:25.915893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:32:20.550 Child process pid: 150086 00:32:20.550 [2024-11-29 12:16:25.916140] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.809 [Child] Asynchronous Event Request test 00:32:20.809 [Child] Attached to 0000:00:06.0 00:32:20.809 [Child] Registering asynchronous event callbacks... 00:32:20.809 [Child] Getting orig temperature thresholds of all controllers 00:32:20.809 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:20.809 [Child] Waiting for all controllers to trigger AER and reset threshold 00:32:20.809 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:20.809 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:20.809 [Child] Cleaning up... 00:32:20.809 Asynchronous Event Request test 00:32:20.809 Attached to 0000:00:06.0 00:32:20.809 Reset controller to setup AER completions for this process 00:32:20.809 Registering asynchronous event callbacks... 
00:32:20.809 Getting orig temperature thresholds of all controllers 00:32:20.809 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:20.809 Setting all controllers temperature threshold low to trigger AER 00:32:20.809 Waiting for all controllers temperature threshold to be set lower 00:32:20.809 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:20.809 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:32:20.809 Waiting for all controllers to trigger AER and reset threshold 00:32:20.809 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:20.809 Cleaning up... 00:32:20.809 00:32:20.809 real 0m0.582s 00:32:20.809 user 0m0.191s 00:32:20.809 sys 0m0.221s 00:32:20.809 12:16:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:20.809 ************************************ 00:32:20.809 END TEST nvme_multi_aen 00:32:20.809 ************************************ 00:32:20.809 12:16:26 -- common/autotest_common.sh@10 -- # set +x 00:32:20.809 12:16:26 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:20.809 12:16:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:32:20.809 12:16:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:20.809 12:16:26 -- common/autotest_common.sh@10 -- # set +x 00:32:20.809 ************************************ 00:32:20.809 START TEST nvme_startup 00:32:20.809 ************************************ 00:32:20.810 12:16:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:21.068 Initializing NVMe Controllers 00:32:21.068 Attached to 0000:00:06.0 00:32:21.068 Initialization complete. 00:32:21.068 Time used:200373.172 (us). 00:32:21.068 00:32:21.068 real 0m0.275s 00:32:21.068 user 0m0.096s 00:32:21.068 sys 0m0.092s 00:32:21.068 12:16:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:21.068 ************************************ 00:32:21.068 12:16:26 -- common/autotest_common.sh@10 -- # set +x 00:32:21.068 END TEST nvme_startup 00:32:21.068 ************************************ 00:32:21.327 12:16:26 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:32:21.327 12:16:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:21.327 12:16:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:21.327 12:16:26 -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 ************************************ 00:32:21.327 START TEST nvme_multi_secondary 00:32:21.327 ************************************ 00:32:21.327 12:16:26 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:32:21.327 12:16:26 -- nvme/nvme.sh@52 -- # pid0=150151 00:32:21.327 12:16:26 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:32:21.327 12:16:26 -- nvme/nvme.sh@54 -- # pid1=150152 00:32:21.327 12:16:26 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:32:21.327 12:16:26 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:24.612 Initializing NVMe Controllers 00:32:24.612 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:24.612 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:32:24.612 Initialization complete. Launching workers. 
00:32:24.612 ======================================================== 00:32:24.612 Latency(us) 00:32:24.612 Device Information : IOPS MiB/s Average min max 00:32:24.612 PCIE (0000:00:06.0) NSID 1 from core 2: 14741.33 57.58 1084.72 160.81 17037.63 00:32:24.612 ======================================================== 00:32:24.612 Total : 14741.33 57.58 1084.72 160.81 17037.63 00:32:24.612 00:32:24.612 12:16:29 -- nvme/nvme.sh@56 -- # wait 150151 00:32:24.612 Initializing NVMe Controllers 00:32:24.612 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:24.612 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:32:24.612 Initialization complete. Launching workers. 00:32:24.612 ======================================================== 00:32:24.612 Latency(us) 00:32:24.612 Device Information : IOPS MiB/s Average min max 00:32:24.612 PCIE (0000:00:06.0) NSID 1 from core 1: 34451.79 134.58 464.02 152.12 1792.05 00:32:24.612 ======================================================== 00:32:24.612 Total : 34451.79 134.58 464.02 152.12 1792.05 00:32:24.612 00:32:27.169 Initializing NVMe Controllers 00:32:27.169 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:27.169 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:27.169 Initialization complete. Launching workers. 00:32:27.169 ======================================================== 00:32:27.169 Latency(us) 00:32:27.169 Device Information : IOPS MiB/s Average min max 00:32:27.169 PCIE (0000:00:06.0) NSID 1 from core 0: 43227.49 168.86 369.78 85.65 3506.73 00:32:27.169 ======================================================== 00:32:27.169 Total : 43227.49 168.86 369.78 85.65 3506.73 00:32:27.169 00:32:27.169 12:16:32 -- nvme/nvme.sh@57 -- # wait 150152 00:32:27.169 12:16:32 -- nvme/nvme.sh@61 -- # pid0=150227 00:32:27.169 12:16:32 -- nvme/nvme.sh@63 -- # pid1=150228 00:32:27.169 12:16:32 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:32:27.169 12:16:32 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:32:27.169 12:16:32 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:30.453 Initializing NVMe Controllers 00:32:30.453 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:30.453 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:32:30.453 Initialization complete. Launching workers. 00:32:30.453 ======================================================== 00:32:30.453 Latency(us) 00:32:30.453 Device Information : IOPS MiB/s Average min max 00:32:30.453 PCIE (0000:00:06.0) NSID 1 from core 1: 34980.33 136.64 456.99 116.87 1697.56 00:32:30.453 ======================================================== 00:32:30.453 Total : 34980.33 136.64 456.99 116.87 1697.56 00:32:30.453 00:32:30.453 Initializing NVMe Controllers 00:32:30.453 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:30.453 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:30.453 Initialization complete. Launching workers. 
00:32:30.453 ======================================================== 00:32:30.453 Latency(us) 00:32:30.453 Device Information : IOPS MiB/s Average min max 00:32:30.453 PCIE (0000:00:06.0) NSID 1 from core 0: 35114.27 137.17 455.26 144.08 1439.92 00:32:30.453 ======================================================== 00:32:30.453 Total : 35114.27 137.17 455.26 144.08 1439.92 00:32:30.453 00:32:32.355 Initializing NVMe Controllers 00:32:32.355 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:32.355 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:32:32.355 Initialization complete. Launching workers. 00:32:32.355 ======================================================== 00:32:32.355 Latency(us) 00:32:32.355 Device Information : IOPS MiB/s Average min max 00:32:32.355 PCIE (0000:00:06.0) NSID 1 from core 2: 18132.77 70.83 881.74 152.92 28692.60 00:32:32.355 ======================================================== 00:32:32.355 Total : 18132.77 70.83 881.74 152.92 28692.60 00:32:32.355 00:32:32.355 12:16:37 -- nvme/nvme.sh@65 -- # wait 150227 00:32:32.355 12:16:37 -- nvme/nvme.sh@66 -- # wait 150228 00:32:32.355 00:32:32.355 real 0m11.027s 00:32:32.355 user 0m18.573s 00:32:32.355 sys 0m0.649s 00:32:32.355 12:16:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:32.355 ************************************ 00:32:32.355 END TEST nvme_multi_secondary 00:32:32.355 ************************************ 00:32:32.355 12:16:37 -- common/autotest_common.sh@10 -- # set +x 00:32:32.355 12:16:37 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:32:32.355 12:16:37 -- nvme/nvme.sh@102 -- # kill_stub 00:32:32.355 12:16:37 -- common/autotest_common.sh@1075 -- # [[ -e /proc/149462 ]] 00:32:32.355 12:16:37 -- common/autotest_common.sh@1076 -- # kill 149462 00:32:32.355 12:16:37 -- common/autotest_common.sh@1077 -- # wait 149462 00:32:32.613 [2024-11-29 12:16:38.082904] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 150085) is not found. Dropping the request. 00:32:32.613 [2024-11-29 12:16:38.083040] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 150085) is not found. Dropping the request. 00:32:32.613 [2024-11-29 12:16:38.083111] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 150085) is not found. Dropping the request. 00:32:32.613 [2024-11-29 12:16:38.083158] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 150085) is not found. Dropping the request. 00:32:32.872 12:16:38 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:32:32.872 12:16:38 -- common/autotest_common.sh@1083 -- # echo 2 00:32:32.872 12:16:38 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:32.872 12:16:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:32.872 12:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:32.872 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:32:32.872 ************************************ 00:32:32.872 START TEST bdev_nvme_reset_stuck_adm_cmd 00:32:32.872 ************************************ 00:32:32.872 12:16:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:32.872 * Looking for test storage... 
00:32:32.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:32.872 12:16:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:32.872 12:16:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:32.872 12:16:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:32.872 12:16:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:32.872 12:16:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:32.872 12:16:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:32.872 12:16:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:32.872 12:16:38 -- scripts/common.sh@335 -- # IFS=.-: 00:32:32.872 12:16:38 -- scripts/common.sh@335 -- # read -ra ver1 00:32:32.872 12:16:38 -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.872 12:16:38 -- scripts/common.sh@336 -- # read -ra ver2 00:32:32.872 12:16:38 -- scripts/common.sh@337 -- # local 'op=<' 00:32:32.872 12:16:38 -- scripts/common.sh@339 -- # ver1_l=2 00:32:32.872 12:16:38 -- scripts/common.sh@340 -- # ver2_l=1 00:32:32.872 12:16:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:32.872 12:16:38 -- scripts/common.sh@343 -- # case "$op" in 00:32:32.872 12:16:38 -- scripts/common.sh@344 -- # : 1 00:32:32.872 12:16:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:32.872 12:16:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.872 12:16:38 -- scripts/common.sh@364 -- # decimal 1 00:32:32.872 12:16:38 -- scripts/common.sh@352 -- # local d=1 00:32:32.872 12:16:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.872 12:16:38 -- scripts/common.sh@354 -- # echo 1 00:32:32.872 12:16:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:32.872 12:16:38 -- scripts/common.sh@365 -- # decimal 2 00:32:32.872 12:16:38 -- scripts/common.sh@352 -- # local d=2 00:32:32.872 12:16:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.872 12:16:38 -- scripts/common.sh@354 -- # echo 2 00:32:32.872 12:16:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:32.872 12:16:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:32.872 12:16:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:32.872 12:16:38 -- scripts/common.sh@367 -- # return 0 00:32:32.872 12:16:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.872 12:16:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.872 --rc genhtml_branch_coverage=1 00:32:32.872 --rc genhtml_function_coverage=1 00:32:32.872 --rc genhtml_legend=1 00:32:32.872 --rc geninfo_all_blocks=1 00:32:32.872 --rc geninfo_unexecuted_blocks=1 00:32:32.872 00:32:32.872 ' 00:32:32.872 12:16:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.872 --rc genhtml_branch_coverage=1 00:32:32.872 --rc genhtml_function_coverage=1 00:32:32.872 --rc genhtml_legend=1 00:32:32.872 --rc geninfo_all_blocks=1 00:32:32.872 --rc geninfo_unexecuted_blocks=1 00:32:32.872 00:32:32.872 ' 00:32:32.872 12:16:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.872 --rc genhtml_branch_coverage=1 00:32:32.872 --rc genhtml_function_coverage=1 00:32:32.872 --rc genhtml_legend=1 00:32:32.872 --rc geninfo_all_blocks=1 00:32:32.872 --rc geninfo_unexecuted_blocks=1 00:32:32.872 00:32:32.872 ' 00:32:32.872 12:16:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.872 --rc genhtml_branch_coverage=1 00:32:32.872 --rc genhtml_function_coverage=1 00:32:32.872 --rc genhtml_legend=1 00:32:32.872 --rc geninfo_all_blocks=1 00:32:32.872 --rc geninfo_unexecuted_blocks=1 00:32:32.872 00:32:32.872 ' 00:32:32.872 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:32:32.872 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:32:32.872 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:32:32.872 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:32:32.872 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:32:32.872 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:32:32.872 12:16:38 -- common/autotest_common.sh@1519 -- # bdfs=() 00:32:32.872 12:16:38 -- common/autotest_common.sh@1519 -- # local bdfs 00:32:32.872 12:16:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:32:32.872 12:16:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:32:32.872 12:16:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:32:32.872 12:16:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:32:32.872 12:16:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:32.872 12:16:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:32.872 12:16:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:32:33.129 12:16:38 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:32:33.129 12:16:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:32:33.129 12:16:38 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:32:33.129 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:32:33.129 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:32:33.129 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=150395 00:32:33.129 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:32:33.129 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:33.129 12:16:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 150395 00:32:33.129 12:16:38 -- common/autotest_common.sh@829 -- # '[' -z 150395 ']' 00:32:33.129 12:16:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.129 12:16:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.129 12:16:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.129 12:16:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.129 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:32:33.129 [2024-11-29 12:16:38.494117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:32:33.129 [2024-11-29 12:16:38.494922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150395 ] 00:32:33.388 [2024-11-29 12:16:38.685469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:33.388 [2024-11-29 12:16:38.789538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:33.388 [2024-11-29 12:16:38.789987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.388 [2024-11-29 12:16:38.790148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.388 [2024-11-29 12:16:38.790616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:33.388 [2024-11-29 12:16:38.790623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.321 12:16:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.321 12:16:39 -- common/autotest_common.sh@862 -- # return 0 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:32:34.321 12:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.321 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:32:34.321 nvme0n1 00:32:34.321 12:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_lfFKg.txt 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:32:34.321 12:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.321 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:32:34.321 true 00:32:34.321 12:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732882599 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=150421 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:32:34.321 12:16:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:36.221 12:16:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.221 12:16:41 -- common/autotest_common.sh@10 -- # set +x 00:32:36.221 [2024-11-29 12:16:41.557150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:36.221 [2024-11-29 12:16:41.557529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.221 [2024-11-29 12:16:41.557623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:32:36.221 [2024-11-29 12:16:41.557677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.221 [2024-11-29 12:16:41.559407] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:36.221 12:16:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.221 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 150421 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 150421 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 150421 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.221 12:16:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.221 12:16:41 -- common/autotest_common.sh@10 -- # set +x 00:32:36.221 12:16:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_lfFKg.txt 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_lfFKg.txt 00:32:36.221 12:16:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 150395 00:32:36.221 12:16:41 -- common/autotest_common.sh@936 -- # '[' -z 150395 ']' 00:32:36.221 12:16:41 -- common/autotest_common.sh@940 -- # kill -0 150395 00:32:36.221 12:16:41 -- common/autotest_common.sh@941 -- # uname 00:32:36.221 
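The status decoded just above can be reproduced with the same shell tools the trace already uses. A minimal sketch (not the actual base64_decode_bits body, whose full definition is outside this log) that pulls the NVMe status field out of the captured 16-byte completion and splits it into SCT/SC:

    cpl=AAAAAAAAAAAAAAAAAAACAA==                          # .cpl value read from /tmp/err_inj_lfFKg.txt above
    bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[14] | (bytes[15] << 8) ))            # CQE bytes 14-15: phase bit plus status field
    printf 'sct=0x%x sc=0x%x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))
    # prints sct=0x0 sc=0x1, matching the INVALID OPCODE (00/01) completion logged above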
12:16:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:36.221 12:16:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150395 00:32:36.221 12:16:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:36.221 killing process with pid 150395 00:32:36.221 12:16:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:36.222 12:16:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150395' 00:32:36.222 12:16:41 -- common/autotest_common.sh@955 -- # kill 150395 00:32:36.222 12:16:41 -- common/autotest_common.sh@960 -- # wait 150395 00:32:36.787 12:16:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:32:36.787 12:16:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:32:36.787 00:32:36.787 real 0m3.928s 00:32:36.787 user 0m13.851s 00:32:36.787 sys 0m0.538s 00:32:36.787 12:16:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:36.787 ************************************ 00:32:36.787 12:16:42 -- common/autotest_common.sh@10 -- # set +x 00:32:36.787 END TEST bdev_nvme_reset_stuck_adm_cmd 00:32:36.787 ************************************ 00:32:36.787 12:16:42 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:32:36.787 12:16:42 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:32:36.787 12:16:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:36.787 12:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:36.787 12:16:42 -- common/autotest_common.sh@10 -- # set +x 00:32:36.787 ************************************ 00:32:36.787 START TEST nvme_fio 00:32:36.787 ************************************ 00:32:36.787 12:16:42 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:32:36.787 12:16:42 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:32:36.787 12:16:42 -- nvme/nvme.sh@32 -- # ran_fio=false 00:32:36.787 12:16:42 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:32:36.787 12:16:42 -- common/autotest_common.sh@1508 -- # bdfs=() 00:32:36.787 12:16:42 -- common/autotest_common.sh@1508 -- # local bdfs 00:32:36.787 12:16:42 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:36.787 12:16:42 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:36.787 12:16:42 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:32:36.787 12:16:42 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:32:36.787 12:16:42 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:32:36.787 12:16:42 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:32:36.787 12:16:42 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:32:36.787 12:16:42 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:36.787 12:16:42 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:36.787 12:16:42 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:37.045 12:16:42 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:37.045 12:16:42 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:37.303 12:16:42 -- nvme/nvme.sh@41 -- # bs=4096 00:32:37.304 12:16:42 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:32:37.304 
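The get_nvme_bdfs helper traced a few lines above reduces to a one-liner: ask gen_nvme.sh for the generated bdev_nvme config and pull each controller's PCI address out of it with jq. A standalone sketch of the same step, using the paths from this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # on this VM prints the single controller: 0000:00:06.0

The fio command line above encodes that same address in its --filename, with the colons replaced by dots because ':' is fio's own filename separator.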
12:16:42 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:32:37.304 12:16:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:32:37.304 12:16:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:37.304 12:16:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:32:37.304 12:16:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:37.304 12:16:42 -- common/autotest_common.sh@1330 -- # shift 00:32:37.304 12:16:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:32:37.304 12:16:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.304 12:16:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:37.304 12:16:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:37.304 12:16:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:32:37.304 12:16:42 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:32:37.304 12:16:42 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:32:37.304 12:16:42 -- common/autotest_common.sh@1336 -- # break 00:32:37.304 12:16:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:37.304 12:16:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:32:37.562 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:37.562 fio-3.35 00:32:37.562 Starting 1 thread 00:32:40.845 00:32:40.845 test: (groupid=0, jobs=1): err= 0: pid=150550: Fri Nov 29 12:16:46 2024 00:32:40.845 read: IOPS=18.1k, BW=70.8MiB/s (74.2MB/s)(142MiB/2001msec) 00:32:40.845 slat (usec): min=4, max=103, avg= 5.45, stdev= 1.16 00:32:40.845 clat (usec): min=252, max=7426, avg=3511.44, stdev=386.33 00:32:40.845 lat (usec): min=257, max=7467, avg=3516.88, stdev=386.83 00:32:40.845 clat percentiles (usec): 00:32:40.845 | 1.00th=[ 3032], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3294], 00:32:40.845 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:32:40.845 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 4228], 95.00th=[ 4293], 00:32:40.845 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 7046], 99.95th=[ 7242], 00:32:40.845 | 99.99th=[ 7373] 00:32:40.845 bw ( KiB/s): min=73312, max=75776, per=100.00%, avg=74720.00, stdev=1269.15, samples=3 00:32:40.845 iops : min=18328, max=18944, avg=18680.00, stdev=317.29, samples=3 00:32:40.845 write: IOPS=18.1k, BW=70.9MiB/s (74.3MB/s)(142MiB/2001msec); 0 zone resets 00:32:40.845 slat (nsec): min=4484, max=45690, avg=5583.65, stdev=1028.13 00:32:40.845 clat (usec): min=243, max=7582, avg=3526.45, stdev=403.40 00:32:40.845 lat (usec): min=248, max=7588, avg=3532.03, stdev=403.90 00:32:40.845 clat percentiles (usec): 00:32:40.845 | 1.00th=[ 3064], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3326], 00:32:40.845 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:32:40.845 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 4228], 95.00th=[ 4293], 00:32:40.845 | 99.00th=[ 4490], 99.50th=[ 5211], 99.90th=[ 7242], 99.95th=[ 7308], 
00:32:40.845 | 99.99th=[ 7439] 00:32:40.845 bw ( KiB/s): min=73464, max=75712, per=100.00%, avg=74776.00, stdev=1170.22, samples=3 00:32:40.845 iops : min=18366, max=18928, avg=18694.00, stdev=292.55, samples=3 00:32:40.845 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:40.845 lat (msec) : 2=0.09%, 4=87.35%, 10=12.51% 00:32:40.845 cpu : usr=99.95%, sys=0.00%, ctx=5, majf=0, minf=39 00:32:40.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:40.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:40.845 issued rwts: total=36249,36296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:40.845 00:32:40.845 Run status group 0 (all jobs): 00:32:40.845 READ: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=142MiB (148MB), run=2001-2001msec 00:32:40.845 WRITE: bw=70.9MiB/s (74.3MB/s), 70.9MiB/s-70.9MiB/s (74.3MB/s-74.3MB/s), io=142MiB (149MB), run=2001-2001msec 00:32:41.104 ----------------------------------------------------- 00:32:41.104 Suppressions used: 00:32:41.104 count bytes template 00:32:41.104 1 32 /usr/src/fio/parse.c 00:32:41.104 ----------------------------------------------------- 00:32:41.104 00:32:41.104 12:16:46 -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:41.104 12:16:46 -- nvme/nvme.sh@46 -- # true 00:32:41.104 00:32:41.104 real 0m4.189s 00:32:41.104 user 0m3.493s 00:32:41.104 sys 0m0.380s 00:32:41.104 ************************************ 00:32:41.104 END TEST nvme_fio 00:32:41.104 ************************************ 00:32:41.104 12:16:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:41.104 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:32:41.104 00:32:41.104 real 0m44.961s 00:32:41.104 user 1m58.238s 00:32:41.104 sys 0m7.601s 00:32:41.104 12:16:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:41.104 ************************************ 00:32:41.104 END TEST nvme 00:32:41.104 ************************************ 00:32:41.104 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:32:41.104 12:16:46 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:32:41.104 12:16:46 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:41.104 12:16:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:41.104 12:16:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:41.104 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:32:41.104 ************************************ 00:32:41.104 START TEST nvme_scc 00:32:41.104 ************************************ 00:32:41.104 12:16:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:41.104 * Looking for test storage... 
00:32:41.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:41.104 12:16:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:41.104 12:16:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:41.104 12:16:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:41.364 12:16:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:41.364 12:16:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:41.364 12:16:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:41.364 12:16:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:41.364 12:16:46 -- scripts/common.sh@335 -- # IFS=.-: 00:32:41.364 12:16:46 -- scripts/common.sh@335 -- # read -ra ver1 00:32:41.364 12:16:46 -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.364 12:16:46 -- scripts/common.sh@336 -- # read -ra ver2 00:32:41.364 12:16:46 -- scripts/common.sh@337 -- # local 'op=<' 00:32:41.364 12:16:46 -- scripts/common.sh@339 -- # ver1_l=2 00:32:41.364 12:16:46 -- scripts/common.sh@340 -- # ver2_l=1 00:32:41.364 12:16:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:41.364 12:16:46 -- scripts/common.sh@343 -- # case "$op" in 00:32:41.364 12:16:46 -- scripts/common.sh@344 -- # : 1 00:32:41.364 12:16:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:41.364 12:16:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.364 12:16:46 -- scripts/common.sh@364 -- # decimal 1 00:32:41.364 12:16:46 -- scripts/common.sh@352 -- # local d=1 00:32:41.364 12:16:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.364 12:16:46 -- scripts/common.sh@354 -- # echo 1 00:32:41.364 12:16:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:41.364 12:16:46 -- scripts/common.sh@365 -- # decimal 2 00:32:41.364 12:16:46 -- scripts/common.sh@352 -- # local d=2 00:32:41.364 12:16:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.364 12:16:46 -- scripts/common.sh@354 -- # echo 2 00:32:41.364 12:16:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:41.364 12:16:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:41.364 12:16:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:41.364 12:16:46 -- scripts/common.sh@367 -- # return 0 00:32:41.364 12:16:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.364 12:16:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.364 --rc genhtml_branch_coverage=1 00:32:41.364 --rc genhtml_function_coverage=1 00:32:41.364 --rc genhtml_legend=1 00:32:41.364 --rc geninfo_all_blocks=1 00:32:41.364 --rc geninfo_unexecuted_blocks=1 00:32:41.364 00:32:41.364 ' 00:32:41.364 12:16:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.364 --rc genhtml_branch_coverage=1 00:32:41.364 --rc genhtml_function_coverage=1 00:32:41.364 --rc genhtml_legend=1 00:32:41.364 --rc geninfo_all_blocks=1 00:32:41.364 --rc geninfo_unexecuted_blocks=1 00:32:41.364 00:32:41.364 ' 00:32:41.364 12:16:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.364 --rc genhtml_branch_coverage=1 00:32:41.364 --rc genhtml_function_coverage=1 00:32:41.364 --rc genhtml_legend=1 00:32:41.364 --rc geninfo_all_blocks=1 00:32:41.364 --rc geninfo_unexecuted_blocks=1 00:32:41.364 00:32:41.364 ' 00:32:41.364 12:16:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.364 --rc genhtml_branch_coverage=1 00:32:41.364 --rc genhtml_function_coverage=1 00:32:41.364 --rc genhtml_legend=1 00:32:41.364 --rc geninfo_all_blocks=1 00:32:41.364 --rc geninfo_unexecuted_blocks=1 00:32:41.364 00:32:41.364 ' 00:32:41.364 12:16:46 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:41.364 12:16:46 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:41.364 12:16:46 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:41.364 12:16:46 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:41.364 12:16:46 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:41.364 12:16:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.364 12:16:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.364 12:16:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.364 12:16:46 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:41.364 12:16:46 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:41.364 12:16:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:41.364 12:16:46 -- paths/export.sh@5 -- # export PATH 00:32:41.364 12:16:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:41.364 12:16:46 -- nvme/functions.sh@10 -- # ctrls=() 00:32:41.364 12:16:46 -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:41.364 12:16:46 -- nvme/functions.sh@11 -- # nvmes=() 00:32:41.364 12:16:46 -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:41.364 12:16:46 -- nvme/functions.sh@12 -- # bdfs=() 00:32:41.364 12:16:46 -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:41.364 12:16:46 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:41.364 12:16:46 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:41.364 12:16:46 -- nvme/functions.sh@14 -- # nvme_name= 00:32:41.364 
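functions.sh, sourced above, keeps its view of the hardware in bash associative arrays (ctrls, nvmes, bdfs), and the scan_nvme_ctrls pass that follows fills nvme0[...] by walking the key/value output of nvme id-ctrl. A rough standalone sketch of that parsing loop, assuming the nvme-cli binary at the path used below and a controller visible as /dev/nvme0 as in this run:

    declare -A nvme0
    while IFS=': ' read -r reg val; do
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]} ${nvme0[mn]}"    # expected here: 0x1b36 QEMU NVMe Ctrl

(The real nvme_get traced below also handles namespaces and multi-word values more carefully; this only shows the shape of the loop.)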
12:16:46 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:41.364 12:16:46 -- nvme/nvme_scc.sh@12 -- # uname 00:32:41.364 12:16:46 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:41.364 12:16:46 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:32:41.364 12:16:46 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:41.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:41.623 Waiting for block devices as requested 00:32:41.623 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:41.623 12:16:47 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:41.623 12:16:47 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:41.623 12:16:47 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:41.623 12:16:47 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:41.623 12:16:47 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:32:41.624 12:16:47 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:32:41.624 12:16:47 -- scripts/common.sh@15 -- # local i 00:32:41.624 12:16:47 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:32:41.624 12:16:47 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:32:41.624 12:16:47 -- scripts/common.sh@24 -- # return 0 00:32:41.624 12:16:47 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:41.624 12:16:47 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:41.624 12:16:47 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@18 -- # shift 00:32:41.624 12:16:47 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:41.624 
12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 
-- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.624 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.624 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.624 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:41.625 
12:16:47 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.625 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.625 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:41.625 12:16:47 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- 
nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.885 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:41.885 12:16:47 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.885 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 
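The trace above is the nvme_get loop in functions.sh caching every field that `nvme id-ctrl` reports into the nvme0 associative array: each output line is split on ':', the register name and value are kept, and empty fields are skipped. A minimal sketch of that pattern under the same assumptions (nvme-cli available as `nvme` on PATH, controller visible at /dev/nvme0; the helper below is illustrative, not the literal functions.sh code):

    # Cache "register : value" pairs from nvme-cli into a bash associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # field name, e.g. vid, oncs, mdts
        val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} oncs=${ctrl[oncs]} mdts=${ctrl[mdts]}"

The real script keeps one such array per controller and per namespace (nvme0, nvme0n1, ...) so the later feature checks in this log can read the cached values instead of re-querying the device.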
00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- 
# nvme0[awun]=0 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.886 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.886 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.886 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:41.887 12:16:47 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:41.887 12:16:47 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:41.887 12:16:47 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 
id-ns /dev/nvme0n1 00:32:41.887 12:16:47 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@18 -- # shift 00:32:41.887 12:16:47 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.887 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:41.887 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.887 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # 
nvme0n1[dps]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.888 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:41.888 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.888 12:16:47 -- nvme/functions.sh@21 -- # read -r 
reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:41.889 12:16:47 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # IFS=: 00:32:41.889 12:16:47 -- nvme/functions.sh@21 -- # read -r reg val 00:32:41.889 12:16:47 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:41.889 12:16:47 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:41.889 12:16:47 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:32:41.889 12:16:47 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:32:41.889 12:16:47 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:41.889 12:16:47 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:32:41.889 12:16:47 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:41.889 12:16:47 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:32:41.889 12:16:47 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:32:41.889 12:16:47 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:32:41.889 12:16:47 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:32:41.889 12:16:47 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:32:41.889 12:16:47 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:32:41.889 12:16:47 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:41.889 12:16:47 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:41.889 12:16:47 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:41.889 12:16:47 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:41.889 12:16:47 -- 
nvme/functions.sh@76 -- # echo 0x15d 00:32:41.889 12:16:47 -- nvme/functions.sh@184 -- # oncs=0x15d 00:32:41.889 12:16:47 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:32:41.889 12:16:47 -- nvme/functions.sh@197 -- # echo nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:32:41.889 12:16:47 -- nvme/functions.sh@206 -- # echo nvme0 00:32:41.889 12:16:47 -- nvme/functions.sh@207 -- # return 0 00:32:41.889 12:16:47 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:32:41.889 12:16:47 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:32:41.889 12:16:47 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:42.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:42.407 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:32:43.343 12:16:48 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:32:43.343 12:16:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:32:43.343 12:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:43.343 12:16:48 -- common/autotest_common.sh@10 -- # set +x 00:32:43.343 ************************************ 00:32:43.343 START TEST nvme_simple_copy 00:32:43.343 ************************************ 00:32:43.343 12:16:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:32:43.602 Initializing NVMe Controllers 00:32:43.602 Attaching to 0000:00:06.0 00:32:43.602 Controller supports SCC. Attached to 0000:00:06.0 00:32:43.602 Namespace ID: 1 size: 5GB 00:32:43.602 Initialization complete. 00:32:43.602 00:32:43.602 Controller QEMU NVMe Ctrl (12340 ) 00:32:43.602 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:32:43.602 Namespace Block Size:4096 00:32:43.602 Writing LBAs 0 to 63 with Random Data 00:32:43.602 Copied LBAs from 0 - 63 to the Destination LBA 256 00:32:43.602 LBAs matching Written Data: 64 00:32:43.602 00:32:43.602 real 0m0.267s 00:32:43.602 user 0m0.103s 00:32:43.602 sys 0m0.066s 00:32:43.602 12:16:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:43.602 12:16:49 -- common/autotest_common.sh@10 -- # set +x 00:32:43.602 ************************************ 00:32:43.602 END TEST nvme_simple_copy 00:32:43.602 ************************************ 00:32:43.861 00:32:43.861 real 0m2.673s 00:32:43.861 user 0m0.798s 00:32:43.861 sys 0m1.786s 00:32:43.861 12:16:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:43.861 ************************************ 00:32:43.861 END TEST nvme_scc 00:32:43.861 ************************************ 00:32:43.861 12:16:49 -- common/autotest_common.sh@10 -- # set +x 00:32:43.861 12:16:49 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:32:43.861 12:16:49 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:32:43.861 12:16:49 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:32:43.861 12:16:49 -- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]] 00:32:43.861 12:16:49 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:32:43.861 12:16:49 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:43.861 12:16:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:43.861 12:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:43.862 12:16:49 -- common/autotest_common.sh@10 -- # set +x 00:32:43.862 ************************************ 00:32:43.862 START TEST nvme_rpc 
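Before launching the simple-copy test shown above, nvme_scc.sh asked get_ctrl_with_feature for a controller whose ONCS field advertises the Copy command: the cached value 0x15d is tested against bit 8, which is exactly the (( oncs & 1 << 8 )) check visible in the trace. A standalone version of that gate might look like this (assuming nvme-cli on PATH and a controller at /dev/nvme0; a sketch, not the literal functions.sh implementation):

    # Does this controller advertise the NVMe Copy command (ONCS bit 8)?
    oncs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^oncs/ {gsub(/ /,"",$2); print $2}')
    if (( oncs & 1 << 8 )); then
        echo "/dev/nvme0 supports Simple Copy (oncs=$oncs)"
    fi

With the QEMU controller in this run, oncs=0x15d has bit 8 (0x100) set, so the controller is echoed back and the simple-copy test proceeds.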
00:32:43.862 ************************************ 00:32:43.862 12:16:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:43.862 * Looking for test storage... 00:32:43.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:43.862 12:16:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:43.862 12:16:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:43.862 12:16:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:43.862 12:16:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:43.862 12:16:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:43.862 12:16:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:43.862 12:16:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:43.862 12:16:49 -- scripts/common.sh@335 -- # IFS=.-: 00:32:43.862 12:16:49 -- scripts/common.sh@335 -- # read -ra ver1 00:32:43.862 12:16:49 -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.862 12:16:49 -- scripts/common.sh@336 -- # read -ra ver2 00:32:43.862 12:16:49 -- scripts/common.sh@337 -- # local 'op=<' 00:32:43.862 12:16:49 -- scripts/common.sh@339 -- # ver1_l=2 00:32:43.862 12:16:49 -- scripts/common.sh@340 -- # ver2_l=1 00:32:43.862 12:16:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:43.862 12:16:49 -- scripts/common.sh@343 -- # case "$op" in 00:32:43.862 12:16:49 -- scripts/common.sh@344 -- # : 1 00:32:43.862 12:16:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:43.862 12:16:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.862 12:16:49 -- scripts/common.sh@364 -- # decimal 1 00:32:43.862 12:16:49 -- scripts/common.sh@352 -- # local d=1 00:32:43.862 12:16:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.862 12:16:49 -- scripts/common.sh@354 -- # echo 1 00:32:43.862 12:16:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:43.862 12:16:49 -- scripts/common.sh@365 -- # decimal 2 00:32:43.862 12:16:49 -- scripts/common.sh@352 -- # local d=2 00:32:43.862 12:16:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.862 12:16:49 -- scripts/common.sh@354 -- # echo 2 00:32:43.862 12:16:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:43.862 12:16:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:43.862 12:16:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:43.862 12:16:49 -- scripts/common.sh@367 -- # return 0 00:32:43.862 12:16:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.862 12:16:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.862 --rc genhtml_branch_coverage=1 00:32:43.862 --rc genhtml_function_coverage=1 00:32:43.862 --rc genhtml_legend=1 00:32:43.862 --rc geninfo_all_blocks=1 00:32:43.862 --rc geninfo_unexecuted_blocks=1 00:32:43.862 00:32:43.862 ' 00:32:43.862 12:16:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.862 --rc genhtml_branch_coverage=1 00:32:43.862 --rc genhtml_function_coverage=1 00:32:43.862 --rc genhtml_legend=1 00:32:43.862 --rc geninfo_all_blocks=1 00:32:43.862 --rc geninfo_unexecuted_blocks=1 00:32:43.862 00:32:43.862 ' 00:32:43.862 12:16:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.862 --rc genhtml_branch_coverage=1 00:32:43.862 
--rc genhtml_function_coverage=1 00:32:43.862 --rc genhtml_legend=1 00:32:43.862 --rc geninfo_all_blocks=1 00:32:43.862 --rc geninfo_unexecuted_blocks=1 00:32:43.862 00:32:43.862 ' 00:32:43.862 12:16:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.862 --rc genhtml_branch_coverage=1 00:32:43.862 --rc genhtml_function_coverage=1 00:32:43.862 --rc genhtml_legend=1 00:32:43.862 --rc geninfo_all_blocks=1 00:32:43.862 --rc geninfo_unexecuted_blocks=1 00:32:43.862 00:32:43.862 ' 00:32:43.862 12:16:49 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:43.862 12:16:49 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:43.862 12:16:49 -- common/autotest_common.sh@1519 -- # bdfs=() 00:32:43.862 12:16:49 -- common/autotest_common.sh@1519 -- # local bdfs 00:32:43.862 12:16:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:32:43.862 12:16:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:32:43.862 12:16:49 -- common/autotest_common.sh@1508 -- # bdfs=() 00:32:43.862 12:16:49 -- common/autotest_common.sh@1508 -- # local bdfs 00:32:43.862 12:16:49 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:44.122 12:16:49 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:44.122 12:16:49 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:32:44.122 12:16:49 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:32:44.122 12:16:49 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:32:44.122 12:16:49 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:32:44.122 12:16:49 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:32:44.122 12:16:49 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=151052 00:32:44.122 12:16:49 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:44.122 12:16:49 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 151052 00:32:44.122 12:16:49 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:44.122 12:16:49 -- common/autotest_common.sh@829 -- # '[' -z 151052 ']' 00:32:44.122 12:16:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.122 12:16:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:44.122 12:16:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.122 12:16:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:44.122 12:16:49 -- common/autotest_common.sh@10 -- # set +x 00:32:44.122 [2024-11-29 12:16:49.480693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
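The nvme_rpc test resolves its target device the same way each run: get_first_nvme_bdf calls scripts/gen_nvme.sh, extracts every traddr from the generated JSON with jq, and keeps the first entry, which becomes $bdf (0000:00:06.0 here) for the spdk_tgt session and the bdev_nvme RPC calls that follow. Reduced to a standalone snippet (paths copied from this run; jq assumed to be installed):

    # First NVMe PCI address, derived from gen_nvme.sh output as in nvme_rpc.sh
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    bdf=${bdfs[0]}            # e.g. 0000:00:06.0 in this log
    echo "$bdf"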
00:32:44.122 [2024-11-29 12:16:49.480964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151052 ] 00:32:44.380 [2024-11-29 12:16:49.640371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:44.380 [2024-11-29 12:16:49.737692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:44.380 [2024-11-29 12:16:49.738102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.380 [2024-11-29 12:16:49.738112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.015 12:16:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:45.015 12:16:50 -- common/autotest_common.sh@862 -- # return 0 00:32:45.015 12:16:50 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:32:45.311 Nvme0n1 00:32:45.311 12:16:50 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:45.311 12:16:50 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:45.569 request: 00:32:45.569 { 00:32:45.569 "filename": "non_existing_file", 00:32:45.569 "bdev_name": "Nvme0n1", 00:32:45.569 "method": "bdev_nvme_apply_firmware", 00:32:45.569 "req_id": 1 00:32:45.569 } 00:32:45.569 Got JSON-RPC error response 00:32:45.569 response: 00:32:45.569 { 00:32:45.569 "code": -32603, 00:32:45.569 "message": "open file failed." 00:32:45.569 } 00:32:45.569 12:16:51 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:45.569 12:16:51 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:45.569 12:16:51 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:45.826 12:16:51 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:45.826 12:16:51 -- nvme/nvme_rpc.sh@40 -- # killprocess 151052 00:32:45.826 12:16:51 -- common/autotest_common.sh@936 -- # '[' -z 151052 ']' 00:32:45.826 12:16:51 -- common/autotest_common.sh@940 -- # kill -0 151052 00:32:45.826 12:16:51 -- common/autotest_common.sh@941 -- # uname 00:32:45.826 12:16:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:45.826 12:16:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151052 00:32:45.826 12:16:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:45.826 killing process with pid 151052 00:32:45.826 12:16:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:45.826 12:16:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151052' 00:32:45.826 12:16:51 -- common/autotest_common.sh@955 -- # kill 151052 00:32:45.826 12:16:51 -- common/autotest_common.sh@960 -- # wait 151052 00:32:46.391 00:32:46.391 real 0m2.577s 00:32:46.391 user 0m5.090s 00:32:46.391 sys 0m0.642s 00:32:46.391 12:16:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:46.391 12:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:46.391 ************************************ 00:32:46.391 END TEST nvme_rpc 00:32:46.391 ************************************ 00:32:46.391 12:16:51 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:46.391 12:16:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:46.391 12:16:51 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:32:46.391 12:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:46.391 ************************************ 00:32:46.391 START TEST nvme_rpc_timeouts 00:32:46.391 ************************************ 00:32:46.391 12:16:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:46.391 * Looking for test storage... 00:32:46.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:46.391 12:16:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:46.391 12:16:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:46.391 12:16:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:46.648 12:16:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:46.648 12:16:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:46.648 12:16:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:46.648 12:16:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:46.648 12:16:51 -- scripts/common.sh@335 -- # IFS=.-: 00:32:46.648 12:16:51 -- scripts/common.sh@335 -- # read -ra ver1 00:32:46.648 12:16:51 -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.648 12:16:51 -- scripts/common.sh@336 -- # read -ra ver2 00:32:46.648 12:16:51 -- scripts/common.sh@337 -- # local 'op=<' 00:32:46.648 12:16:51 -- scripts/common.sh@339 -- # ver1_l=2 00:32:46.648 12:16:51 -- scripts/common.sh@340 -- # ver2_l=1 00:32:46.648 12:16:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:46.648 12:16:51 -- scripts/common.sh@343 -- # case "$op" in 00:32:46.648 12:16:51 -- scripts/common.sh@344 -- # : 1 00:32:46.648 12:16:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:46.648 12:16:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:46.648 12:16:51 -- scripts/common.sh@364 -- # decimal 1 00:32:46.648 12:16:51 -- scripts/common.sh@352 -- # local d=1 00:32:46.648 12:16:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.648 12:16:51 -- scripts/common.sh@354 -- # echo 1 00:32:46.648 12:16:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:46.648 12:16:51 -- scripts/common.sh@365 -- # decimal 2 00:32:46.648 12:16:51 -- scripts/common.sh@352 -- # local d=2 00:32:46.648 12:16:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.648 12:16:51 -- scripts/common.sh@354 -- # echo 2 00:32:46.648 12:16:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:46.649 12:16:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:46.649 12:16:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:46.649 12:16:51 -- scripts/common.sh@367 -- # return 0 00:32:46.649 12:16:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.649 12:16:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:46.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.649 --rc genhtml_branch_coverage=1 00:32:46.649 --rc genhtml_function_coverage=1 00:32:46.649 --rc genhtml_legend=1 00:32:46.649 --rc geninfo_all_blocks=1 00:32:46.649 --rc geninfo_unexecuted_blocks=1 00:32:46.649 00:32:46.649 ' 00:32:46.649 12:16:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:46.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.649 --rc genhtml_branch_coverage=1 00:32:46.649 --rc genhtml_function_coverage=1 00:32:46.649 --rc genhtml_legend=1 00:32:46.649 --rc geninfo_all_blocks=1 00:32:46.649 --rc geninfo_unexecuted_blocks=1 00:32:46.649 00:32:46.649 ' 00:32:46.649 12:16:51 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:46.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.649 --rc genhtml_branch_coverage=1 00:32:46.649 --rc genhtml_function_coverage=1 00:32:46.649 --rc genhtml_legend=1 00:32:46.649 --rc geninfo_all_blocks=1 00:32:46.649 --rc geninfo_unexecuted_blocks=1 00:32:46.649 00:32:46.649 ' 00:32:46.649 12:16:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:46.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.649 --rc genhtml_branch_coverage=1 00:32:46.649 --rc genhtml_function_coverage=1 00:32:46.649 --rc genhtml_legend=1 00:32:46.649 --rc geninfo_all_blocks=1 00:32:46.649 --rc geninfo_unexecuted_blocks=1 00:32:46.649 00:32:46.649 ' 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_151114 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_151114 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=151147 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:46.649 12:16:51 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 151147 00:32:46.649 12:16:51 -- common/autotest_common.sh@829 -- # '[' -z 151147 ']' 00:32:46.649 12:16:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.649 12:16:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.649 12:16:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.649 12:16:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.649 12:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:46.649 [2024-11-29 12:16:52.059539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:32:46.649 [2024-11-29 12:16:52.059814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151147 ] 00:32:46.960 [2024-11-29 12:16:52.209086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:46.960 [2024-11-29 12:16:52.300663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:46.960 [2024-11-29 12:16:52.301078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.960 [2024-11-29 12:16:52.301088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.524 12:16:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:47.524 12:16:53 -- common/autotest_common.sh@862 -- # return 0 00:32:47.524 Checking default timeout settings: 00:32:47.524 12:16:53 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:47.524 12:16:53 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:48.089 Making settings changes with rpc: 00:32:48.089 12:16:53 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:48.089 12:16:53 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:48.346 Check default vs. modified settings: 00:32:48.346 12:16:53 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:32:48.346 12:16:53 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_151114 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_151114 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:48.604 Setting action_on_timeout is changed as expected. 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_151114 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_151114 00:32:48.604 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:48.605 Setting timeout_us is changed as expected. 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_151114 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_151114 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:48.605 Setting timeout_admin_us is changed as expected. 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_151114 /tmp/settings_modified_151114 00:32:48.605 12:16:53 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 151147 00:32:48.605 12:16:53 -- common/autotest_common.sh@936 -- # '[' -z 151147 ']' 00:32:48.605 12:16:53 -- common/autotest_common.sh@940 -- # kill -0 151147 00:32:48.605 12:16:53 -- common/autotest_common.sh@941 -- # uname 00:32:48.605 12:16:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:48.605 12:16:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151147 00:32:48.605 12:16:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:48.605 killing process with pid 151147 00:32:48.605 12:16:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:48.605 12:16:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151147' 00:32:48.605 12:16:53 -- common/autotest_common.sh@955 -- # kill 151147 00:32:48.605 12:16:53 -- common/autotest_common.sh@960 -- # wait 151147 00:32:49.172 RPC TIMEOUT SETTING TEST PASSED. 00:32:49.172 12:16:54 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
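The round trip above — capture the defaults, change the three timeout knobs, capture again, compare — can be replayed by hand with the same RPCs (a sketch; the pid-derived /tmp file names from the trace are shortened, and the harness's waitforlisten/trap plumbing is omitted):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
defaults=/tmp/settings_default
modified=/tmp/settings_modified

"$rpc_py" save_config > "$defaults"          # defaults: action_on_timeout=none, both timeouts 0
"$rpc_py" bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort
"$rpc_py" save_config > "$modified"

# Same comparison the trace performs: pull each value out of both dumps and
# require that it actually changed.
for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$defaults" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done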
00:32:49.172 00:32:49.172 real 0m2.621s 00:32:49.172 user 0m5.260s 00:32:49.172 sys 0m0.573s 00:32:49.172 12:16:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:49.172 ************************************ 00:32:49.172 END TEST nvme_rpc_timeouts 00:32:49.172 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:49.172 ************************************ 00:32:49.172 12:16:54 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]] 00:32:49.172 12:16:54 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@255 -- # timing_exit lib 00:32:49.172 12:16:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:49.172 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:49.172 12:16:54 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:49.172 12:16:54 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:32:49.172 12:16:54 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:32:49.172 12:16:54 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:32:49.172 12:16:54 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]] 00:32:49.172 12:16:54 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:32:49.172 12:16:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:49.172 12:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:49.172 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:49.172 ************************************ 00:32:49.172 START TEST blockdev_raid5f 00:32:49.172 ************************************ 00:32:49.172 12:16:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:32:49.172 * Looking for test storage... 
00:32:49.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:32:49.172 12:16:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:49.172 12:16:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:49.172 12:16:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:49.172 12:16:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:49.172 12:16:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:49.172 12:16:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:49.172 12:16:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:49.172 12:16:54 -- scripts/common.sh@335 -- # IFS=.-: 00:32:49.172 12:16:54 -- scripts/common.sh@335 -- # read -ra ver1 00:32:49.172 12:16:54 -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.172 12:16:54 -- scripts/common.sh@336 -- # read -ra ver2 00:32:49.172 12:16:54 -- scripts/common.sh@337 -- # local 'op=<' 00:32:49.172 12:16:54 -- scripts/common.sh@339 -- # ver1_l=2 00:32:49.172 12:16:54 -- scripts/common.sh@340 -- # ver2_l=1 00:32:49.172 12:16:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:49.172 12:16:54 -- scripts/common.sh@343 -- # case "$op" in 00:32:49.172 12:16:54 -- scripts/common.sh@344 -- # : 1 00:32:49.172 12:16:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:49.172 12:16:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:49.172 12:16:54 -- scripts/common.sh@364 -- # decimal 1 00:32:49.172 12:16:54 -- scripts/common.sh@352 -- # local d=1 00:32:49.172 12:16:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.172 12:16:54 -- scripts/common.sh@354 -- # echo 1 00:32:49.172 12:16:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:49.172 12:16:54 -- scripts/common.sh@365 -- # decimal 2 00:32:49.172 12:16:54 -- scripts/common.sh@352 -- # local d=2 00:32:49.172 12:16:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.172 12:16:54 -- scripts/common.sh@354 -- # echo 2 00:32:49.172 12:16:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:49.172 12:16:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:49.172 12:16:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:49.172 12:16:54 -- scripts/common.sh@367 -- # return 0 00:32:49.172 12:16:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.172 12:16:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:49.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.172 --rc genhtml_branch_coverage=1 00:32:49.172 --rc genhtml_function_coverage=1 00:32:49.172 --rc genhtml_legend=1 00:32:49.172 --rc geninfo_all_blocks=1 00:32:49.172 --rc geninfo_unexecuted_blocks=1 00:32:49.172 00:32:49.172 ' 00:32:49.172 12:16:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:49.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.172 --rc genhtml_branch_coverage=1 00:32:49.172 --rc genhtml_function_coverage=1 00:32:49.172 --rc genhtml_legend=1 00:32:49.172 --rc geninfo_all_blocks=1 00:32:49.173 --rc geninfo_unexecuted_blocks=1 00:32:49.173 00:32:49.173 ' 00:32:49.173 12:16:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:49.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.173 --rc genhtml_branch_coverage=1 00:32:49.173 --rc genhtml_function_coverage=1 00:32:49.173 --rc genhtml_legend=1 00:32:49.173 --rc geninfo_all_blocks=1 00:32:49.173 --rc geninfo_unexecuted_blocks=1 00:32:49.173 00:32:49.173 ' 00:32:49.173 12:16:54 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:49.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.173 --rc genhtml_branch_coverage=1 00:32:49.173 --rc genhtml_function_coverage=1 00:32:49.173 --rc genhtml_legend=1 00:32:49.173 --rc geninfo_all_blocks=1 00:32:49.173 --rc geninfo_unexecuted_blocks=1 00:32:49.173 00:32:49.173 ' 00:32:49.173 12:16:54 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:32:49.173 12:16:54 -- bdev/nbd_common.sh@6 -- # set -e 00:32:49.432 12:16:54 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:49.432 12:16:54 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:49.432 12:16:54 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:32:49.432 12:16:54 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:32:49.432 12:16:54 -- bdev/blockdev.sh@18 -- # : 00:32:49.432 12:16:54 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:32:49.432 12:16:54 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:32:49.432 12:16:54 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:32:49.432 12:16:54 -- bdev/blockdev.sh@672 -- # uname -s 00:32:49.432 12:16:54 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:32:49.432 12:16:54 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:32:49.432 12:16:54 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:32:49.432 12:16:54 -- bdev/blockdev.sh@681 -- # crypto_device= 00:32:49.432 12:16:54 -- bdev/blockdev.sh@682 -- # dek= 00:32:49.432 12:16:54 -- bdev/blockdev.sh@683 -- # env_ctx= 00:32:49.432 12:16:54 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:32:49.432 12:16:54 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:32:49.432 12:16:54 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:32:49.432 12:16:54 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:32:49.432 12:16:54 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:32:49.432 12:16:54 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=151288 00:32:49.432 12:16:54 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:32:49.432 12:16:54 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:49.432 12:16:54 -- bdev/blockdev.sh@47 -- # waitforlisten 151288 00:32:49.432 12:16:54 -- common/autotest_common.sh@829 -- # '[' -z 151288 ']' 00:32:49.432 12:16:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.432 12:16:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.432 12:16:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.432 12:16:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.432 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:49.432 [2024-11-29 12:16:54.751046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
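In the trace that follows, blockdev.sh's setup_raid5f_conf drives rpc_cmd to create three malloc bdevs (Malloc0/1/2) and assemble them into the raid5f volume that bdev_get_bdevs later reports (raid_level raid5f, strip_size_kb 2, three base bdevs of 65536 512-byte blocks each). A rough manual equivalent is sketched below; the bdev_malloc_create/bdev_raid_create calls and the 32 MiB size are inferred from that dump rather than copied from the script, and raid5f support has to be compiled into the target, so treat this as an approximation:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Three 32 MiB backing bdevs with 512-byte blocks (65536 blocks each).
for m in Malloc0 Malloc1 Malloc2; do
        "$rpc_py" bdev_malloc_create -b "$m" 32 512
done

# Assemble them into a raid5f volume with a 2 KiB strip, as reported below.
"$rpc_py" bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"
"$rpc_py" bdev_get_bdevs -b raid5f      # should match the JSON dump in the trace below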
00:32:49.432 [2024-11-29 12:16:54.751258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151288 ] 00:32:49.432 [2024-11-29 12:16:54.889137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.691 [2024-11-29 12:16:54.983878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:49.691 [2024-11-29 12:16:54.984126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.258 12:16:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:50.258 12:16:55 -- common/autotest_common.sh@862 -- # return 0 00:32:50.258 12:16:55 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:32:50.258 12:16:55 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:32:50.258 12:16:55 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:32:50.258 12:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.258 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:50.258 Malloc0 00:32:50.258 Malloc1 00:32:50.258 Malloc2 00:32:50.258 12:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.258 12:16:55 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:32:50.258 12:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.258 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:50.258 12:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.258 12:16:55 -- bdev/blockdev.sh@738 -- # cat 00:32:50.258 12:16:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:32:50.258 12:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.258 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:50.258 12:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.258 12:16:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:32:50.258 12:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.258 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:50.517 12:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.517 12:16:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:50.517 12:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.517 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:50.517 12:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.517 12:16:55 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:32:50.517 12:16:55 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:32:50.517 12:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.517 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:50.517 12:16:55 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:32:50.517 12:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.517 12:16:55 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:32:50.517 12:16:55 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "24d5822e-7daa-47ee-951c-9aa84af5bda5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "24d5822e-7daa-47ee-951c-9aa84af5bda5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "24d5822e-7daa-47ee-951c-9aa84af5bda5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "11ccf217-924e-427f-adec-2608c8bd0d35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3a91659b-2fa5-4b6b-b380-17d6d8f09ba7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c1aa83b7-6b64-4b08-bd57-d93973ca569a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:32:50.517 12:16:55 -- bdev/blockdev.sh@747 -- # jq -r .name 00:32:50.517 12:16:55 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:32:50.517 12:16:55 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:32:50.517 12:16:55 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:32:50.517 12:16:55 -- bdev/blockdev.sh@752 -- # killprocess 151288 00:32:50.517 12:16:55 -- common/autotest_common.sh@936 -- # '[' -z 151288 ']' 00:32:50.517 12:16:55 -- common/autotest_common.sh@940 -- # kill -0 151288 00:32:50.517 12:16:55 -- common/autotest_common.sh@941 -- # uname 00:32:50.517 12:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:50.517 12:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151288 00:32:50.517 12:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:50.517 killing process with pid 151288 00:32:50.517 12:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:50.517 12:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151288' 00:32:50.517 12:16:55 -- common/autotest_common.sh@955 -- # kill 151288 00:32:50.517 12:16:55 -- common/autotest_common.sh@960 -- # wait 151288 00:32:51.086 12:16:56 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:51.086 12:16:56 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:32:51.086 12:16:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:32:51.086 12:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:51.086 12:16:56 -- common/autotest_common.sh@10 -- # set +x 00:32:51.086 ************************************ 00:32:51.087 START TEST bdev_hello_world 00:32:51.087 ************************************ 00:32:51.087 12:16:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:32:51.087 [2024-11-29 12:16:56.508761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:32:51.087 [2024-11-29 12:16:56.509053] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151336 ] 00:32:51.345 [2024-11-29 12:16:56.657481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.345 [2024-11-29 12:16:56.746441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.615 [2024-11-29 12:16:56.970103] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:51.615 [2024-11-29 12:16:56.970203] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:32:51.615 [2024-11-29 12:16:56.970247] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:51.615 [2024-11-29 12:16:56.970688] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:51.615 [2024-11-29 12:16:56.970867] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:51.615 [2024-11-29 12:16:56.970925] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:51.615 [2024-11-29 12:16:56.971021] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:32:51.615 00:32:51.615 [2024-11-29 12:16:56.971066] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:51.897 00:32:51.897 real 0m0.832s 00:32:51.897 user 0m0.495s 00:32:51.897 sys 0m0.224s 00:32:51.897 12:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:51.897 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:32:51.897 ************************************ 00:32:51.897 END TEST bdev_hello_world 00:32:51.897 ************************************ 00:32:51.897 12:16:57 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:32:51.897 12:16:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:51.897 12:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:51.897 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:32:51.897 ************************************ 00:32:51.897 START TEST bdev_bounds 00:32:51.897 ************************************ 00:32:51.897 12:16:57 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:32:51.897 12:16:57 -- bdev/blockdev.sh@288 -- # bdevio_pid=151374 00:32:51.897 12:16:57 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:51.897 12:16:57 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:51.897 Process bdevio pid: 151374 00:32:51.897 12:16:57 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 151374' 00:32:51.897 12:16:57 -- bdev/blockdev.sh@291 -- # waitforlisten 151374 00:32:51.897 12:16:57 -- common/autotest_common.sh@829 -- # '[' -z 151374 ']' 00:32:51.897 12:16:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.897 12:16:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.897 12:16:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
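The bdev_hello_world run above boils down to a single invocation of the bundled example against the raid5f bdev; the command line is the one shown in the trace:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f
# Expected tail of the output, as above: "Read string from bdev : Hello World!"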
00:32:51.897 12:16:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.897 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:32:51.897 [2024-11-29 12:16:57.391510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:32:51.897 [2024-11-29 12:16:57.391748] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151374 ] 00:32:52.156 [2024-11-29 12:16:57.560617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:52.156 [2024-11-29 12:16:57.655521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.156 [2024-11-29 12:16:57.655661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.156 [2024-11-29 12:16:57.655664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.091 12:16:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.091 12:16:58 -- common/autotest_common.sh@862 -- # return 0 00:32:53.091 12:16:58 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:53.091 I/O targets: 00:32:53.091 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:32:53.091 00:32:53.091 00:32:53.091 CUnit - A unit testing framework for C - Version 2.1-3 00:32:53.091 http://cunit.sourceforge.net/ 00:32:53.091 00:32:53.091 00:32:53.091 Suite: bdevio tests on: raid5f 00:32:53.091 Test: blockdev write read block ...passed 00:32:53.091 Test: blockdev write zeroes read block ...passed 00:32:53.091 Test: blockdev write zeroes read no split ...passed 00:32:53.091 Test: blockdev write zeroes read split ...passed 00:32:53.350 Test: blockdev write zeroes read split partial ...passed 00:32:53.350 Test: blockdev reset ...passed 00:32:53.350 Test: blockdev write read 8 blocks ...passed 00:32:53.350 Test: blockdev write read size > 128k ...passed 00:32:53.350 Test: blockdev write read invalid size ...passed 00:32:53.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:53.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:53.350 Test: blockdev write read max offset ...passed 00:32:53.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:53.350 Test: blockdev writev readv 8 blocks ...passed 00:32:53.350 Test: blockdev writev readv 30 x 1block ...passed 00:32:53.350 Test: blockdev writev readv block ...passed 00:32:53.350 Test: blockdev writev readv size > 128k ...passed 00:32:53.350 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:53.350 Test: blockdev comparev and writev ...passed 00:32:53.350 Test: blockdev nvme passthru rw ...passed 00:32:53.350 Test: blockdev nvme passthru vendor specific ...passed 00:32:53.350 Test: blockdev nvme admin passthru ...passed 00:32:53.350 Test: blockdev copy ...passed 00:32:53.350 00:32:53.350 Run Summary: Type Total Ran Passed Failed Inactive 00:32:53.350 suites 1 1 n/a 0 0 00:32:53.350 tests 23 23 23 0 0 00:32:53.350 asserts 130 130 130 0 n/a 00:32:53.350 00:32:53.350 Elapsed time = 0.355 seconds 00:32:53.350 0 00:32:53.350 12:16:58 -- bdev/blockdev.sh@293 -- # killprocess 151374 00:32:53.350 12:16:58 -- common/autotest_common.sh@936 -- # '[' -z 151374 ']' 00:32:53.350 12:16:58 -- common/autotest_common.sh@940 -- # kill -0 151374 00:32:53.350 12:16:58 -- common/autotest_common.sh@941 -- # uname 00:32:53.350 12:16:58 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:53.350 12:16:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151374 00:32:53.350 12:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:53.350 12:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:53.350 killing process with pid 151374 00:32:53.350 12:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151374' 00:32:53.350 12:16:58 -- common/autotest_common.sh@955 -- # kill 151374 00:32:53.350 12:16:58 -- common/autotest_common.sh@960 -- # wait 151374 00:32:53.609 12:16:58 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:32:53.609 00:32:53.609 real 0m1.652s 00:32:53.609 user 0m4.084s 00:32:53.609 sys 0m0.325s 00:32:53.609 12:16:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:53.609 12:16:58 -- common/autotest_common.sh@10 -- # set +x 00:32:53.609 ************************************ 00:32:53.609 END TEST bdev_bounds 00:32:53.609 ************************************ 00:32:53.609 12:16:59 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:32:53.609 12:16:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:32:53.609 12:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:53.609 12:16:59 -- common/autotest_common.sh@10 -- # set +x 00:32:53.609 ************************************ 00:32:53.609 START TEST bdev_nbd 00:32:53.609 ************************************ 00:32:53.609 12:16:59 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:32:53.609 12:16:59 -- bdev/blockdev.sh@298 -- # uname -s 00:32:53.609 12:16:59 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:32:53.609 12:16:59 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:53.609 12:16:59 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:53.609 12:16:59 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:32:53.609 12:16:59 -- bdev/blockdev.sh@302 -- # local bdev_all 00:32:53.609 12:16:59 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:32:53.609 12:16:59 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:32:53.609 12:16:59 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:53.609 12:16:59 -- bdev/blockdev.sh@309 -- # local nbd_all 00:32:53.609 12:16:59 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:32:53.609 12:16:59 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:32:53.609 12:16:59 -- bdev/blockdev.sh@312 -- # local nbd_list 00:32:53.609 12:16:59 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:32:53.609 12:16:59 -- bdev/blockdev.sh@313 -- # local bdev_list 00:32:53.609 12:16:59 -- bdev/blockdev.sh@316 -- # nbd_pid=151425 00:32:53.610 12:16:59 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:53.610 12:16:59 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:53.610 12:16:59 -- bdev/blockdev.sh@318 -- # waitforlisten 151425 /var/tmp/spdk-nbd.sock 00:32:53.610 12:16:59 -- common/autotest_common.sh@829 -- # '[' -z 151425 ']' 00:32:53.610 12:16:59 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:53.610 12:16:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:53.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:53.610 12:16:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:53.610 12:16:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:53.610 12:16:59 -- common/autotest_common.sh@10 -- # set +x 00:32:53.610 [2024-11-29 12:16:59.108522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:32:53.610 [2024-11-29 12:16:59.108782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.868 [2024-11-29 12:16:59.254167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.868 [2024-11-29 12:16:59.352083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.804 12:17:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:54.804 12:17:00 -- common/autotest_common.sh@862 -- # return 0 00:32:54.804 12:17:00 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@24 -- # local i 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:54.804 12:17:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:32:55.063 12:17:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:55.063 12:17:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:55.063 12:17:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:55.063 12:17:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:55.063 12:17:00 -- common/autotest_common.sh@867 -- # local i 00:32:55.063 12:17:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:55.063 12:17:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:55.063 12:17:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:55.063 12:17:00 -- common/autotest_common.sh@871 -- # break 00:32:55.063 12:17:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:55.063 12:17:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:55.063 12:17:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:55.063 1+0 records in 00:32:55.063 1+0 records out 00:32:55.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467729 s, 8.8 MB/s 00:32:55.063 12:17:00 -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:55.063 12:17:00 -- common/autotest_common.sh@884 -- # size=4096 00:32:55.063 12:17:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:55.063 12:17:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:55.063 12:17:00 -- common/autotest_common.sh@887 -- # return 0 00:32:55.064 12:17:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:55.064 12:17:00 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:55.064 12:17:00 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:55.322 12:17:00 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:55.322 { 00:32:55.322 "nbd_device": "/dev/nbd0", 00:32:55.322 "bdev_name": "raid5f" 00:32:55.322 } 00:32:55.322 ]' 00:32:55.322 12:17:00 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:55.322 12:17:00 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:55.322 12:17:00 -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:55.323 { 00:32:55.323 "nbd_device": "/dev/nbd0", 00:32:55.323 "bdev_name": "raid5f" 00:32:55.323 } 00:32:55.323 ]' 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@51 -- # local i 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:55.323 12:17:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@41 -- # break 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@45 -- # return 0 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.581 12:17:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@65 -- # true 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@65 -- # count=0 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@122 -- # count=0 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@127 -- # return 0 00:32:55.839 12:17:01 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@12 -- # local i 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:55.839 12:17:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:32:56.096 /dev/nbd0 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:56.096 12:17:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:56.096 12:17:01 -- common/autotest_common.sh@867 -- # local i 00:32:56.096 12:17:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:56.096 12:17:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:56.096 12:17:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:56.096 12:17:01 -- common/autotest_common.sh@871 -- # break 00:32:56.096 12:17:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:56.096 12:17:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:56.096 12:17:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:56.096 1+0 records in 00:32:56.096 1+0 records out 00:32:56.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338832 s, 12.1 MB/s 00:32:56.096 12:17:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:56.096 12:17:01 -- common/autotest_common.sh@884 -- # size=4096 00:32:56.096 12:17:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:56.096 12:17:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:56.096 12:17:01 -- common/autotest_common.sh@887 -- # return 0 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:56.096 12:17:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:56.353 12:17:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:56.353 { 00:32:56.353 "nbd_device": "/dev/nbd0", 00:32:56.353 "bdev_name": "raid5f" 00:32:56.353 } 00:32:56.353 ]' 00:32:56.353 12:17:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:56.353 { 00:32:56.353 "nbd_device": "/dev/nbd0", 00:32:56.353 "bdev_name": "raid5f" 00:32:56.353 
} 00:32:56.353 ]' 00:32:56.353 12:17:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@65 -- # count=1 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@66 -- # echo 1 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@95 -- # count=1 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:56.611 256+0 records in 00:32:56.611 256+0 records out 00:32:56.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00900079 s, 116 MB/s 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:56.611 256+0 records in 00:32:56.611 256+0 records out 00:32:56.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295122 s, 35.5 MB/s 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:56.611 12:17:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@51 -- # local i 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:56.612 12:17:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
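The data-verify pass above exports the raid5f bdev through the kernel NBD driver and pushes 1 MiB of random data through it. Issued directly, with the same commands the trace shows (only the scratch-file location is changed for the sketch):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=$(mktemp)                                    # the test keeps this file in the repo tree

"$rpc_py" -s "$sock" nbd_start_disk raid5f /dev/nbd0
dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M "$tmp" /dev/nbd0                    # read back through /dev/nbd0 and compare
rm "$tmp"
"$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0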
00:32:56.870 12:17:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@41 -- # break 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@45 -- # return 0 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:56.870 12:17:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@65 -- # true 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@65 -- # count=0 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@104 -- # count=0 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@109 -- # return 0 00:32:57.128 12:17:02 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:57.128 12:17:02 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:57.387 malloc_lvol_verify 00:32:57.387 12:17:02 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:57.645 a87f4d4a-7632-4cd5-b278-354a2d560367 00:32:57.645 12:17:03 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:57.903 83d76d56-70cc-4bf2-8978-d1cc3aa8005d 00:32:57.903 12:17:03 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:58.160 /dev/nbd0 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:58.160 mke2fs 1.46.5 (30-Dec-2021) 00:32:58.160 00:32:58.160 Filesystem too small for a journal 00:32:58.160 Discarding device blocks: 0/1024 done 00:32:58.160 Creating filesystem with 1024 4k blocks and 1024 inodes 00:32:58.160 00:32:58.160 Allocating group tables: 0/1 done 00:32:58.160 Writing inode tables: 0/1 done 00:32:58.160 Writing superblocks and filesystem accounting information: 0/1 done 00:32:58.160 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@51 -- # local i 00:32:58.160 12:17:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
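The lvol check above layers a logical volume on a small malloc bdev, exports it over NBD and formats it, which is why mkfs.ext4 warns that the 4 MiB device is too small for a journal. The same sequence, issued directly with the commands shown in the trace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512-byte blocks
"$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
"$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
"$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
mkfs.ext4 /dev/nbd0
"$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0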
00:32:58.160 12:17:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:58.418 12:17:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:58.418 12:17:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@41 -- # break 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@45 -- # return 0 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:58.419 12:17:03 -- bdev/nbd_common.sh@147 -- # return 0 00:32:58.419 12:17:03 -- bdev/blockdev.sh@324 -- # killprocess 151425 00:32:58.419 12:17:03 -- common/autotest_common.sh@936 -- # '[' -z 151425 ']' 00:32:58.419 12:17:03 -- common/autotest_common.sh@940 -- # kill -0 151425 00:32:58.419 12:17:03 -- common/autotest_common.sh@941 -- # uname 00:32:58.419 12:17:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:58.419 12:17:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151425 00:32:58.419 12:17:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:58.419 12:17:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:58.419 killing process with pid 151425 00:32:58.419 12:17:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151425' 00:32:58.419 12:17:03 -- common/autotest_common.sh@955 -- # kill 151425 00:32:58.419 12:17:03 -- common/autotest_common.sh@960 -- # wait 151425 00:32:58.985 12:17:04 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:32:58.985 00:32:58.985 real 0m5.196s 00:32:58.985 user 0m8.037s 00:32:58.985 sys 0m1.185s 00:32:58.985 12:17:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:58.985 12:17:04 -- common/autotest_common.sh@10 -- # set +x 00:32:58.985 ************************************ 00:32:58.985 END TEST bdev_nbd 00:32:58.985 ************************************ 00:32:58.985 12:17:04 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:32:58.985 12:17:04 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:32:58.985 12:17:04 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:32:58.985 12:17:04 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:32:58.985 12:17:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:58.986 12:17:04 -- common/autotest_common.sh@10 -- # set +x 00:32:58.986 ************************************ 00:32:58.986 START TEST bdev_fio 00:32:58.986 ************************************ 00:32:58.986 12:17:04 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:32:58.986 12:17:04 -- bdev/blockdev.sh@329 -- # local env_context 00:32:58.986 12:17:04 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:32:58.986 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:32:58.986 12:17:04 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:58.986 12:17:04 -- bdev/blockdev.sh@337 -- # echo '' 00:32:58.986 12:17:04 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:32:58.986 12:17:04 -- bdev/blockdev.sh@337 -- # env_context= 00:32:58.986 12:17:04 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:58.986 12:17:04 -- common/autotest_common.sh@1270 -- # local workload=verify 00:32:58.986 12:17:04 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:32:58.986 12:17:04 -- common/autotest_common.sh@1272 -- # local env_context= 00:32:58.986 12:17:04 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:32:58.986 12:17:04 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:58.986 12:17:04 -- common/autotest_common.sh@1290 -- # cat 00:32:58.986 12:17:04 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1303 -- # cat 00:32:58.986 12:17:04 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:32:58.986 12:17:04 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:58.986 12:17:04 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:32:58.986 12:17:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:58.986 12:17:04 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:32:58.986 12:17:04 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:32:58.986 12:17:04 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:32:58.986 12:17:04 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.986 12:17:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:58.986 12:17:04 -- common/autotest_common.sh@10 -- # set +x 00:32:58.986 ************************************ 00:32:58.986 START TEST bdev_fio_rw_verify 00:32:58.986 ************************************ 00:32:58.986 12:17:04 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.986 12:17:04 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.986 12:17:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:32:58.986 12:17:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:32:58.986 12:17:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:32:58.986 12:17:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:58.986 12:17:04 -- common/autotest_common.sh@1330 -- # shift 00:32:58.986 12:17:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:32:58.986 12:17:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.986 12:17:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:58.986 12:17:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:32:58.986 12:17:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:32:58.986 12:17:04 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:32:58.986 12:17:04 -- common/autotest_common.sh@1336 -- # break 00:32:58.986 12:17:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:58.986 12:17:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:59.259 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:59.259 fio-3.35 00:32:59.259 Starting 1 thread 00:33:11.486 00:33:11.486 job_raid5f: (groupid=0, jobs=1): err= 0: pid=151660: Fri Nov 29 12:17:15 2024 00:33:11.486 read: IOPS=9425, BW=36.8MiB/s (38.6MB/s)(368MiB/10001msec) 00:33:11.486 slat (usec): min=22, max=366, avg=25.69, stdev= 5.41 00:33:11.486 clat (usec): min=13, max=767, avg=168.16, stdev=62.83 00:33:11.486 lat (usec): min=38, max=792, avg=193.85, stdev=63.71 00:33:11.486 clat percentiles (usec): 00:33:11.486 | 50.000th=[ 174], 99.000th=[ 306], 99.900th=[ 449], 99.990th=[ 676], 00:33:11.486 | 99.999th=[ 766] 00:33:11.486 write: IOPS=9881, BW=38.6MiB/s (40.5MB/s)(382MiB/9884msec); 0 zone resets 00:33:11.486 slat (usec): min=10, max=452, avg=22.01, stdev= 5.89 00:33:11.486 clat (usec): min=72, max=1853, avg=385.32, stdev=60.15 00:33:11.486 lat (usec): min=92, max=1876, avg=407.33, stdev=61.89 00:33:11.486 clat percentiles (usec): 00:33:11.486 | 50.000th=[ 388], 99.000th=[ 562], 99.900th=[ 832], 99.990th=[ 1598], 00:33:11.486 | 99.999th=[ 1860] 00:33:11.486 bw ( KiB/s): min=33992, max=42344, per=98.71%, avg=39015.16, stdev=2101.68, samples=19 00:33:11.486 iops : min= 8498, max=10586, avg=9753.79, stdev=525.42, samples=19 00:33:11.486 lat (usec) : 20=0.01%, 50=0.01%, 100=10.59%, 250=34.39%, 500=53.48% 00:33:11.486 lat (usec) : 750=1.45%, 1000=0.06% 00:33:11.486 lat (msec) : 2=0.02% 00:33:11.486 cpu : usr=99.33%, sys=0.59%, ctx=124, majf=0, minf=9779 00:33:11.486 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.486 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.486 issued rwts: total=94264,97669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.486 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:11.486 00:33:11.486 Run status group 0 (all jobs): 00:33:11.486 READ: bw=36.8MiB/s (38.6MB/s), 
36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=368MiB (386MB), run=10001-10001msec 00:33:11.486 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=382MiB (400MB), run=9884-9884msec 00:33:11.486 ----------------------------------------------------- 00:33:11.486 Suppressions used: 00:33:11.486 count bytes template 00:33:11.486 1 7 /usr/src/fio/parse.c 00:33:11.486 477 45792 /usr/src/fio/iolog.c 00:33:11.486 1 904 libcrypto.so 00:33:11.486 ----------------------------------------------------- 00:33:11.486 00:33:11.486 00:33:11.486 real 0m11.298s 00:33:11.486 user 0m11.894s 00:33:11.486 sys 0m0.764s 00:33:11.486 12:17:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:11.486 12:17:15 -- common/autotest_common.sh@10 -- # set +x 00:33:11.486 ************************************ 00:33:11.486 END TEST bdev_fio_rw_verify 00:33:11.486 ************************************ 00:33:11.486 12:17:15 -- bdev/blockdev.sh@348 -- # rm -f 00:33:11.486 12:17:15 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.486 12:17:15 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.486 12:17:15 -- common/autotest_common.sh@1270 -- # local workload=trim 00:33:11.486 12:17:15 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:33:11.486 12:17:15 -- common/autotest_common.sh@1272 -- # local env_context= 00:33:11.486 12:17:15 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:33:11.486 12:17:15 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.486 12:17:15 -- common/autotest_common.sh@1290 -- # cat 00:33:11.486 12:17:15 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:33:11.486 12:17:15 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "24d5822e-7daa-47ee-951c-9aa84af5bda5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "24d5822e-7daa-47ee-951c-9aa84af5bda5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "24d5822e-7daa-47ee-951c-9aa84af5bda5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "11ccf217-924e-427f-adec-2608c8bd0d35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "3a91659b-2fa5-4b6b-b380-17d6d8f09ba7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c1aa83b7-6b64-4b08-bd57-d93973ca569a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:11.486 12:17:15 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:11.486 12:17:15 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:33:11.486 12:17:15 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.486 /home/vagrant/spdk_repo/spdk 00:33:11.486 12:17:15 -- bdev/blockdev.sh@360 -- # popd 00:33:11.486 12:17:15 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:33:11.486 12:17:15 -- bdev/blockdev.sh@362 -- # return 0 00:33:11.486 00:33:11.486 real 0m11.477s 00:33:11.486 user 0m11.990s 00:33:11.486 sys 0m0.847s 00:33:11.486 12:17:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:11.486 12:17:15 -- common/autotest_common.sh@10 -- # set +x 00:33:11.486 ************************************ 00:33:11.486 END TEST bdev_fio 00:33:11.486 ************************************ 00:33:11.486 12:17:15 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:11.486 12:17:15 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:33:11.486 12:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:11.486 12:17:15 -- common/autotest_common.sh@10 -- # set +x 00:33:11.486 ************************************ 00:33:11.486 START TEST bdev_verify 00:33:11.486 ************************************ 00:33:11.486 12:17:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:11.486 [2024-11-29 12:17:15.878649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:33:11.487 [2024-11-29 12:17:15.878861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151823 ] 00:33:11.487 [2024-11-29 12:17:16.035878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:11.487 [2024-11-29 12:17:16.129179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.487 [2024-11-29 12:17:16.129185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.487 Running I/O for 5 seconds... 
00:33:16.779 00:33:16.779 Latency(us) 00:33:16.779 [2024-11-29T12:17:22.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.779 [2024-11-29T12:17:22.290Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:16.779 Verification LBA range: start 0x0 length 0x2000 00:33:16.779 raid5f : 5.01 10057.14 39.29 0.00 0.00 20164.88 226.21 16801.05 00:33:16.779 [2024-11-29T12:17:22.290Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:16.779 Verification LBA range: start 0x2000 length 0x2000 00:33:16.779 raid5f : 5.01 10023.55 39.15 0.00 0.00 20232.71 221.56 16801.05 00:33:16.779 [2024-11-29T12:17:22.290Z] =================================================================================================================== 00:33:16.779 [2024-11-29T12:17:22.290Z] Total : 20080.69 78.44 0.00 0.00 20198.74 221.56 16801.05 00:33:16.779 00:33:16.779 real 0m5.844s 00:33:16.779 user 0m10.909s 00:33:16.779 sys 0m0.232s 00:33:16.779 12:17:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:16.779 ************************************ 00:33:16.779 END TEST bdev_verify 00:33:16.779 ************************************ 00:33:16.779 12:17:21 -- common/autotest_common.sh@10 -- # set +x 00:33:16.779 12:17:21 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:16.779 12:17:21 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:33:16.779 12:17:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:16.779 12:17:21 -- common/autotest_common.sh@10 -- # set +x 00:33:16.779 ************************************ 00:33:16.779 START TEST bdev_verify_big_io 00:33:16.779 ************************************ 00:33:16.779 12:17:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:16.779 [2024-11-29 12:17:21.776043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:33:16.779 [2024-11-29 12:17:21.776318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151919 ] 00:33:16.779 [2024-11-29 12:17:21.931105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:16.779 [2024-11-29 12:17:22.034188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.779 [2024-11-29 12:17:22.034200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.779 Running I/O for 5 seconds... 
00:33:22.049 00:33:22.049 Latency(us) 00:33:22.049 [2024-11-29T12:17:27.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.049 [2024-11-29T12:17:27.560Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:22.049 Verification LBA range: start 0x0 length 0x200 00:33:22.049 raid5f : 5.15 668.88 41.81 0.00 0.00 4991342.32 305.34 158239.65 00:33:22.049 [2024-11-29T12:17:27.560Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:22.049 Verification LBA range: start 0x200 length 0x200 00:33:22.049 raid5f : 5.16 671.91 41.99 0.00 0.00 4967470.20 182.46 152520.15 00:33:22.049 [2024-11-29T12:17:27.560Z] =================================================================================================================== 00:33:22.049 [2024-11-29T12:17:27.560Z] Total : 1340.80 83.80 0.00 0.00 4979368.26 182.46 158239.65 00:33:22.309 00:33:22.309 real 0m6.013s 00:33:22.309 user 0m11.198s 00:33:22.309 sys 0m0.272s 00:33:22.309 12:17:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:22.309 12:17:27 -- common/autotest_common.sh@10 -- # set +x 00:33:22.309 ************************************ 00:33:22.309 END TEST bdev_verify_big_io 00:33:22.309 ************************************ 00:33:22.309 12:17:27 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:22.309 12:17:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:22.309 12:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:22.309 12:17:27 -- common/autotest_common.sh@10 -- # set +x 00:33:22.309 ************************************ 00:33:22.309 START TEST bdev_write_zeroes 00:33:22.309 ************************************ 00:33:22.309 12:17:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:22.568 [2024-11-29 12:17:27.841876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:33:22.568 [2024-11-29 12:17:27.842319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152007 ] 00:33:22.568 [2024-11-29 12:17:27.993858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.827 [2024-11-29 12:17:28.093263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.827 Running I/O for 1 seconds... 
00:33:24.203 00:33:24.203 Latency(us) 00:33:24.203 [2024-11-29T12:17:29.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.203 [2024-11-29T12:17:29.714Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:24.203 raid5f : 1.01 20589.46 80.43 0.00 0.00 6192.70 1720.32 7119.59 00:33:24.203 [2024-11-29T12:17:29.714Z] =================================================================================================================== 00:33:24.203 [2024-11-29T12:17:29.714Z] Total : 20589.46 80.43 0.00 0.00 6192.70 1720.32 7119.59 00:33:24.203 00:33:24.203 real 0m1.864s 00:33:24.203 user 0m1.488s 00:33:24.203 sys 0m0.262s 00:33:24.203 12:17:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:24.203 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:33:24.203 ************************************ 00:33:24.203 END TEST bdev_write_zeroes 00:33:24.203 ************************************ 00:33:24.203 12:17:29 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:24.203 12:17:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:24.203 12:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:24.203 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:33:24.203 ************************************ 00:33:24.203 START TEST bdev_json_nonenclosed 00:33:24.203 ************************************ 00:33:24.203 12:17:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:24.462 [2024-11-29 12:17:29.754342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:33:24.462 [2024-11-29 12:17:29.754624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152052 ] 00:33:24.462 [2024-11-29 12:17:29.897533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.721 [2024-11-29 12:17:29.995781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.721 [2024-11-29 12:17:29.996035] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:33:24.721 [2024-11-29 12:17:29.996083] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:24.721 00:33:24.721 real 0m0.425s 00:33:24.721 user 0m0.205s 00:33:24.721 sys 0m0.120s 00:33:24.721 12:17:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:24.721 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:33:24.721 ************************************ 00:33:24.721 END TEST bdev_json_nonenclosed 00:33:24.721 ************************************ 00:33:24.721 12:17:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:24.721 12:17:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:24.721 12:17:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:24.721 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:33:24.721 ************************************ 00:33:24.721 START TEST bdev_json_nonarray 00:33:24.721 ************************************ 00:33:24.721 12:17:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:24.980 [2024-11-29 12:17:30.240819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:33:24.980 [2024-11-29 12:17:30.241053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152083 ] 00:33:24.980 [2024-11-29 12:17:30.391407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.980 [2024-11-29 12:17:30.489786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.980 [2024-11-29 12:17:30.490034] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:33:24.980 [2024-11-29 12:17:30.490078] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:25.240 00:33:25.240 real 0m0.433s 00:33:25.240 user 0m0.233s 00:33:25.240 sys 0m0.100s 00:33:25.240 12:17:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:25.240 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:33:25.240 ************************************ 00:33:25.240 END TEST bdev_json_nonarray 00:33:25.240 ************************************ 00:33:25.240 12:17:30 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:33:25.240 12:17:30 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:33:25.240 12:17:30 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:33:25.240 12:17:30 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:33:25.240 12:17:30 -- bdev/blockdev.sh@809 -- # cleanup 00:33:25.240 12:17:30 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:25.240 12:17:30 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:25.240 12:17:30 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:33:25.240 12:17:30 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:33:25.240 12:17:30 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:33:25.240 12:17:30 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:33:25.240 00:33:25.240 real 0m36.138s 00:33:25.240 user 0m50.871s 00:33:25.240 sys 0m4.272s 00:33:25.240 12:17:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:25.240 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:33:25.240 ************************************ 00:33:25.240 END TEST blockdev_raid5f 00:33:25.240 ************************************ 00:33:25.240 12:17:30 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:33:25.240 12:17:30 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:33:25.240 12:17:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:25.240 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:33:25.240 12:17:30 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:33:25.240 12:17:30 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:33:25.240 12:17:30 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:33:25.240 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:33:27.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:27.144 Waiting for block devices as requested 00:33:27.144 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:33:27.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:27.404 Cleaning 00:33:27.404 Removing: /var/run/dpdk/spdk0/config 00:33:27.404 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:27.404 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:27.404 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:27.404 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:27.404 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:27.404 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:27.404 Removing: /dev/shm/spdk_tgt_trace.pid115489 00:33:27.404 Removing: /var/run/dpdk/spdk0 00:33:27.404 Removing: /var/run/dpdk/spdk_pid115299 00:33:27.404 Removing: /var/run/dpdk/spdk_pid115489 00:33:27.404 Removing: /var/run/dpdk/spdk_pid115773 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116031 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116204 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116282 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116376 
00:33:27.404 Removing: /var/run/dpdk/spdk_pid116488 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116577 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116623 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116668 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116740 00:33:27.404 Removing: /var/run/dpdk/spdk_pid116853 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117393 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117441 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117501 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117522 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117596 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117617 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117694 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117709 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117766 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117789 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117834 00:33:27.404 Removing: /var/run/dpdk/spdk_pid117857 00:33:27.663 Removing: /var/run/dpdk/spdk_pid117999 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118042 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118085 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118171 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118236 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118266 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118347 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118370 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118417 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118440 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118485 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118508 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118554 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118576 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118618 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118646 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118686 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118721 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118754 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118788 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118822 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118854 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118892 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118922 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118960 00:33:27.663 Removing: /var/run/dpdk/spdk_pid118988 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119027 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119058 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119091 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119126 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119166 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119194 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119234 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119262 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119304 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119329 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119372 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119395 00:33:27.663 Removing: /var/run/dpdk/spdk_pid119440 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119466 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119517 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119550 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119592 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119622 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119661 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119690 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119725 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119813 00:33:27.664 Removing: /var/run/dpdk/spdk_pid119929 00:33:27.664 Removing: /var/run/dpdk/spdk_pid120113 00:33:27.664 
Removing: /var/run/dpdk/spdk_pid120166 00:33:27.664 Removing: /var/run/dpdk/spdk_pid120211 00:33:27.664 Removing: /var/run/dpdk/spdk_pid121406 00:33:27.664 Removing: /var/run/dpdk/spdk_pid121608 00:33:27.664 Removing: /var/run/dpdk/spdk_pid121797 00:33:27.664 Removing: /var/run/dpdk/spdk_pid121905 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122013 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122063 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122101 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122123 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122593 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122676 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122779 00:33:27.664 Removing: /var/run/dpdk/spdk_pid122831 00:33:27.664 Removing: /var/run/dpdk/spdk_pid124001 00:33:27.664 Removing: /var/run/dpdk/spdk_pid124897 00:33:27.664 Removing: /var/run/dpdk/spdk_pid125799 00:33:27.664 Removing: /var/run/dpdk/spdk_pid126938 00:33:27.664 Removing: /var/run/dpdk/spdk_pid128022 00:33:27.664 Removing: /var/run/dpdk/spdk_pid129118 00:33:27.664 Removing: /var/run/dpdk/spdk_pid130653 00:33:27.664 Removing: /var/run/dpdk/spdk_pid131904 00:33:27.664 Removing: /var/run/dpdk/spdk_pid133137 00:33:27.664 Removing: /var/run/dpdk/spdk_pid133820 00:33:27.664 Removing: /var/run/dpdk/spdk_pid134369 00:33:27.664 Removing: /var/run/dpdk/spdk_pid135006 00:33:27.664 Removing: /var/run/dpdk/spdk_pid135470 00:33:27.664 Removing: /var/run/dpdk/spdk_pid136017 00:33:27.664 Removing: /var/run/dpdk/spdk_pid136581 00:33:27.664 Removing: /var/run/dpdk/spdk_pid137252 00:33:27.664 Removing: /var/run/dpdk/spdk_pid137757 00:33:27.664 Removing: /var/run/dpdk/spdk_pid139134 00:33:27.664 Removing: /var/run/dpdk/spdk_pid139747 00:33:27.664 Removing: /var/run/dpdk/spdk_pid140287 00:33:27.664 Removing: /var/run/dpdk/spdk_pid141811 00:33:27.664 Removing: /var/run/dpdk/spdk_pid142499 00:33:27.664 Removing: /var/run/dpdk/spdk_pid143101 00:33:27.664 Removing: /var/run/dpdk/spdk_pid143885 00:33:27.664 Removing: /var/run/dpdk/spdk_pid143930 00:33:27.664 Removing: /var/run/dpdk/spdk_pid143969 00:33:27.664 Removing: /var/run/dpdk/spdk_pid144020 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144152 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144297 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144535 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144829 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144844 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144889 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144907 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144928 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144948 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144969 00:33:27.923 Removing: /var/run/dpdk/spdk_pid144986 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145007 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145027 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145047 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145067 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145083 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145104 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145124 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145143 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145153 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145182 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145195 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145211 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145251 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145274 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145305 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145384 00:33:27.923 Removing: 
/var/run/dpdk/spdk_pid145417 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145432 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145470 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145475 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145494 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145548 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145559 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145591 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145612 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145617 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145634 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145646 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145656 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145672 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145685 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145724 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145750 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145770 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145806 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145818 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145830 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145886 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145894 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145930 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145944 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145956 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145968 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145985 00:33:27.923 Removing: /var/run/dpdk/spdk_pid145990 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146007 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146023 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146111 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146173 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146291 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146314 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146362 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146409 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146442 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146457 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146486 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146515 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146533 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146615 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146668 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146717 00:33:27.923 Removing: /var/run/dpdk/spdk_pid146978 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147087 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147129 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147225 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147283 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147321 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147567 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147754 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147849 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147894 00:33:27.923 Removing: /var/run/dpdk/spdk_pid147924 00:33:27.923 Removing: /var/run/dpdk/spdk_pid148009 00:33:27.923 Removing: /var/run/dpdk/spdk_pid148427 00:33:27.923 Removing: /var/run/dpdk/spdk_pid148458 00:33:27.923 Removing: /var/run/dpdk/spdk_pid148766 00:33:27.923 Removing: /var/run/dpdk/spdk_pid148862 00:33:27.923 Removing: /var/run/dpdk/spdk_pid148956 00:33:27.923 Removing: /var/run/dpdk/spdk_pid149003 00:33:27.923 Removing: /var/run/dpdk/spdk_pid149025 00:33:27.923 Removing: /var/run/dpdk/spdk_pid149063 00:33:27.923 Removing: /var/run/dpdk/spdk_pid150395 00:33:27.923 Removing: /var/run/dpdk/spdk_pid150509 00:33:27.923 Removing: /var/run/dpdk/spdk_pid150518 00:33:28.181 Removing: 
/var/run/dpdk/spdk_pid150545 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151052 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151147 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151288 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151336 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151374 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151645 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151823 00:33:28.181 Removing: /var/run/dpdk/spdk_pid151919 00:33:28.181 Removing: /var/run/dpdk/spdk_pid152007 00:33:28.181 Removing: /var/run/dpdk/spdk_pid152052 00:33:28.181 Removing: /var/run/dpdk/spdk_pid152083 00:33:28.181 Clean 00:33:28.181 killing process with pid 104433 00:33:28.181 killing process with pid 104436 00:33:28.181 12:17:33 -- common/autotest_common.sh@1446 -- # return 0 00:33:28.181 12:17:33 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:33:28.181 12:17:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:28.181 12:17:33 -- common/autotest_common.sh@10 -- # set +x 00:33:28.181 12:17:33 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:33:28.181 12:17:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:28.181 12:17:33 -- common/autotest_common.sh@10 -- # set +x 00:33:28.181 12:17:33 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:28.181 12:17:33 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:28.181 12:17:33 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:28.181 12:17:33 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:33:28.181 12:17:33 -- spdk/autotest.sh@383 -- # hostname 00:33:28.181 12:17:33 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:28.440 geninfo: WARNING: invalid characters removed from testname! 
00:34:15.221 12:18:16 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:16.156 12:18:21 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:19.442 12:18:24 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:22.733 12:18:27 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:26.025 12:18:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:29.312 12:18:34 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:31.843 12:18:37 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:32.102 12:18:37 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:34:32.102 12:18:37 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:34:32.102 12:18:37 -- common/autotest_common.sh@1690 -- $ lcov --version 00:34:32.102 12:18:37 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:34:32.102 12:18:37 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:34:32.102 12:18:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:34:32.102 12:18:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:34:32.102 12:18:37 -- scripts/common.sh@335 -- $ IFS=.-: 00:34:32.102 12:18:37 -- scripts/common.sh@335 -- $ read -ra ver1 00:34:32.102 12:18:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:34:32.102 12:18:37 -- scripts/common.sh@336 -- $ read -ra ver2 00:34:32.102 12:18:37 -- scripts/common.sh@337 -- $ local 'op=<' 00:34:32.102 12:18:37 -- scripts/common.sh@339 -- $ ver1_l=2 00:34:32.102 12:18:37 -- scripts/common.sh@340 -- $ ver2_l=1 00:34:32.102 12:18:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:34:32.102 12:18:37 -- scripts/common.sh@343 -- $ case "$op" in 00:34:32.102 12:18:37 -- scripts/common.sh@344 -- $ : 1 00:34:32.102 12:18:37 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:34:32.102 12:18:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:32.102 12:18:37 -- scripts/common.sh@364 -- $ decimal 1 00:34:32.102 12:18:37 -- scripts/common.sh@352 -- $ local d=1 00:34:32.102 12:18:37 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:34:32.102 12:18:37 -- scripts/common.sh@354 -- $ echo 1 00:34:32.102 12:18:37 -- scripts/common.sh@364 -- $ ver1[v]=1 00:34:32.102 12:18:37 -- scripts/common.sh@365 -- $ decimal 2 00:34:32.102 12:18:37 -- scripts/common.sh@352 -- $ local d=2 00:34:32.102 12:18:37 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:34:32.102 12:18:37 -- scripts/common.sh@354 -- $ echo 2 00:34:32.102 12:18:37 -- scripts/common.sh@365 -- $ ver2[v]=2 00:34:32.102 12:18:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:34:32.102 12:18:37 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:34:32.102 12:18:37 -- scripts/common.sh@367 -- $ return 0 00:34:32.102 12:18:37 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.102 12:18:37 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:34:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.102 --rc genhtml_branch_coverage=1 00:34:32.102 --rc genhtml_function_coverage=1 00:34:32.102 --rc genhtml_legend=1 00:34:32.102 --rc geninfo_all_blocks=1 00:34:32.102 --rc geninfo_unexecuted_blocks=1 00:34:32.102 00:34:32.102 ' 00:34:32.102 12:18:37 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:34:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.102 --rc genhtml_branch_coverage=1 00:34:32.102 --rc genhtml_function_coverage=1 00:34:32.102 --rc genhtml_legend=1 00:34:32.102 --rc geninfo_all_blocks=1 00:34:32.102 --rc geninfo_unexecuted_blocks=1 00:34:32.102 00:34:32.102 ' 00:34:32.102 12:18:37 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:34:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.102 --rc genhtml_branch_coverage=1 00:34:32.102 --rc genhtml_function_coverage=1 00:34:32.102 --rc genhtml_legend=1 00:34:32.102 --rc geninfo_all_blocks=1 00:34:32.102 --rc geninfo_unexecuted_blocks=1 00:34:32.102 00:34:32.102 ' 00:34:32.102 12:18:37 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:34:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.102 --rc genhtml_branch_coverage=1 00:34:32.102 --rc genhtml_function_coverage=1 00:34:32.102 --rc genhtml_legend=1 00:34:32.102 --rc geninfo_all_blocks=1 00:34:32.102 --rc geninfo_unexecuted_blocks=1 00:34:32.102 00:34:32.102 ' 00:34:32.102 12:18:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:32.102 12:18:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:32.102 12:18:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.102 12:18:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.102 12:18:37 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:32.102 12:18:37 -- 
paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:32.102 12:18:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:32.102 12:18:37 -- paths/export.sh@5 -- $ export PATH 00:34:32.102 12:18:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:32.102 12:18:37 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:32.102 12:18:37 -- common/autobuild_common.sh@440 -- $ date +%s 00:34:32.102 12:18:37 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732882717.XXXXXX 00:34:32.102 12:18:37 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732882717.P7aY0J 00:34:32.102 12:18:37 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:34:32.102 12:18:37 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:34:32.102 12:18:37 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:34:32.102 12:18:37 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:34:32.102 12:18:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:32.102 12:18:37 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:32.102 12:18:37 -- common/autobuild_common.sh@456 -- $ get_config_params 00:34:32.102 12:18:37 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:34:32.102 12:18:37 -- common/autotest_common.sh@10 -- $ set +x 00:34:32.102 12:18:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:34:32.102 12:18:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:32.102 12:18:37 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:32.102 12:18:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:32.102 12:18:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:32.102 12:18:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:32.102 12:18:37 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:34:32.102 12:18:37 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:34:32.102 12:18:37 -- common/autotest_common.sh@10 -- $ set +x 00:34:32.102 12:18:37 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:34:32.102 12:18:37 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:34:32.102 12:18:37 -- 
spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:34:32.102 12:18:37 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:34:32.102 12:18:37 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:32.102 12:18:37 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:32.102 12:18:37 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:34:32.102 12:18:37 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:34:32.102 12:18:37 -- spdk/autopackage.sh@40 -- $ get_config_params 00:34:32.102 12:18:37 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:34:32.102 12:18:37 -- common/autotest_common.sh@10 -- $ set +x 00:34:32.102 12:18:37 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:34:32.102 12:18:37 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:34:32.102 12:18:37 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:34:32.360 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:34:32.360 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:34:32.360 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:34:32.360 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:32.618 Using 'verbs' RDMA provider 00:34:43.175 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:34:55.379 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:34:55.379 Creating mk/config.mk...done. 00:34:55.379 Creating mk/cc.flags.mk...done. 00:34:55.379 Type 'make' to build. 00:34:55.379 12:18:59 -- spdk/autopackage.sh@43 -- $ make -j10 00:34:55.379 make[1]: Nothing to be done for 'all'. 
00:34:55.379 CC lib/ut_mock/mock.o 00:34:55.379 CC lib/log/log_flags.o 00:34:55.379 CC lib/log/log.o 00:34:55.379 CC lib/log/log_deprecated.o 00:34:55.379 CC lib/ut/ut.o 00:34:55.379 LIB libspdk_ut_mock.a 00:34:55.379 LIB libspdk_log.a 00:34:55.379 LIB libspdk_ut.a 00:34:55.379 CXX lib/trace_parser/trace.o 00:34:55.379 CC lib/util/base64.o 00:34:55.379 CC lib/dma/dma.o 00:34:55.379 CC lib/util/bit_array.o 00:34:55.379 CC lib/util/cpuset.o 00:34:55.379 CC lib/util/crc16.o 00:34:55.379 CC lib/util/crc32.o 00:34:55.379 CC lib/ioat/ioat.o 00:34:55.379 CC lib/util/crc32c.o 00:34:55.379 CC lib/vfio_user/host/vfio_user_pci.o 00:34:55.379 CC lib/util/crc32_ieee.o 00:34:55.379 CC lib/util/crc64.o 00:34:55.379 CC lib/util/dif.o 00:34:55.379 CC lib/util/fd.o 00:34:55.379 LIB libspdk_dma.a 00:34:55.379 CC lib/util/file.o 00:34:55.379 CC lib/vfio_user/host/vfio_user.o 00:34:55.379 CC lib/util/hexlify.o 00:34:55.379 LIB libspdk_ioat.a 00:34:55.379 CC lib/util/iov.o 00:34:55.379 CC lib/util/math.o 00:34:55.637 CC lib/util/pipe.o 00:34:55.637 CC lib/util/strerror_tls.o 00:34:55.637 CC lib/util/string.o 00:34:55.637 CC lib/util/uuid.o 00:34:55.637 CC lib/util/fd_group.o 00:34:55.637 CC lib/util/xor.o 00:34:55.637 LIB libspdk_vfio_user.a 00:34:55.637 CC lib/util/zipf.o 00:34:55.901 LIB libspdk_util.a 00:34:55.901 CC lib/json/json_parse.o 00:34:55.901 CC lib/conf/conf.o 00:34:55.901 CC lib/json/json_util.o 00:34:55.901 CC lib/json/json_write.o 00:34:55.901 CC lib/idxd/idxd.o 00:34:55.901 CC lib/idxd/idxd_user.o 00:34:55.901 LIB libspdk_trace_parser.a 00:34:55.901 CC lib/vmd/vmd.o 00:34:55.901 CC lib/env_dpdk/env.o 00:34:55.901 CC lib/rdma/common.o 00:34:55.901 CC lib/env_dpdk/memory.o 00:34:56.167 CC lib/env_dpdk/pci.o 00:34:56.167 CC lib/rdma/rdma_verbs.o 00:34:56.167 LIB libspdk_conf.a 00:34:56.167 LIB libspdk_json.a 00:34:56.167 CC lib/env_dpdk/init.o 00:34:56.167 CC lib/env_dpdk/threads.o 00:34:56.167 CC lib/env_dpdk/pci_ioat.o 00:34:56.167 CC lib/env_dpdk/pci_virtio.o 00:34:56.167 LIB libspdk_idxd.a 00:34:56.167 CC lib/env_dpdk/pci_vmd.o 00:34:56.167 CC lib/env_dpdk/pci_idxd.o 00:34:56.167 CC lib/vmd/led.o 00:34:56.167 LIB libspdk_rdma.a 00:34:56.167 CC lib/env_dpdk/pci_event.o 00:34:56.167 CC lib/env_dpdk/sigbus_handler.o 00:34:56.167 CC lib/env_dpdk/pci_dpdk.o 00:34:56.425 CC lib/jsonrpc/jsonrpc_server.o 00:34:56.425 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:34:56.425 CC lib/jsonrpc/jsonrpc_client.o 00:34:56.425 CC lib/env_dpdk/pci_dpdk_2207.o 00:34:56.425 LIB libspdk_vmd.a 00:34:56.425 CC lib/env_dpdk/pci_dpdk_2211.o 00:34:56.425 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:34:56.425 LIB libspdk_jsonrpc.a 00:34:56.683 CC lib/rpc/rpc.o 00:34:56.942 LIB libspdk_rpc.a 00:34:56.942 LIB libspdk_env_dpdk.a 00:34:56.942 CC lib/sock/sock.o 00:34:56.942 CC lib/sock/sock_rpc.o 00:34:56.942 CC lib/notify/notify.o 00:34:56.942 CC lib/notify/notify_rpc.o 00:34:56.942 CC lib/trace/trace.o 00:34:56.942 CC lib/trace/trace_flags.o 00:34:56.942 CC lib/trace/trace_rpc.o 00:34:56.942 LIB libspdk_notify.a 00:34:57.201 LIB libspdk_trace.a 00:34:57.201 LIB libspdk_sock.a 00:34:57.201 CC lib/thread/thread.o 00:34:57.201 CC lib/thread/iobuf.o 00:34:57.201 CC lib/nvme/nvme_ctrlr_cmd.o 00:34:57.201 CC lib/nvme/nvme_ctrlr.o 00:34:57.201 CC lib/nvme/nvme_fabric.o 00:34:57.201 CC lib/nvme/nvme_ns_cmd.o 00:34:57.201 CC lib/nvme/nvme_ns.o 00:34:57.201 CC lib/nvme/nvme_pcie.o 00:34:57.201 CC lib/nvme/nvme_pcie_common.o 00:34:57.201 CC lib/nvme/nvme_qpair.o 00:34:57.460 CC lib/nvme/nvme.o 00:34:57.719 LIB libspdk_thread.a 00:34:57.719 CC 
lib/nvme/nvme_quirks.o 00:34:57.719 CC lib/nvme/nvme_transport.o 00:34:57.719 CC lib/nvme/nvme_discovery.o 00:34:57.719 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:34:57.978 CC lib/accel/accel.o 00:34:57.978 CC lib/blob/blobstore.o 00:34:57.978 CC lib/accel/accel_rpc.o 00:34:57.978 CC lib/accel/accel_sw.o 00:34:58.237 CC lib/init/json_config.o 00:34:58.237 CC lib/init/subsystem.o 00:34:58.237 CC lib/virtio/virtio.o 00:34:58.237 CC lib/virtio/virtio_vhost_user.o 00:34:58.237 CC lib/virtio/virtio_vfio_user.o 00:34:58.237 CC lib/virtio/virtio_pci.o 00:34:58.237 CC lib/init/subsystem_rpc.o 00:34:58.237 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:34:58.237 CC lib/blob/request.o 00:34:58.237 CC lib/nvme/nvme_tcp.o 00:34:58.496 LIB libspdk_accel.a 00:34:58.496 CC lib/nvme/nvme_opal.o 00:34:58.496 CC lib/init/rpc.o 00:34:58.496 CC lib/blob/zeroes.o 00:34:58.496 CC lib/blob/blob_bs_dev.o 00:34:58.496 LIB libspdk_virtio.a 00:34:58.496 CC lib/bdev/bdev.o 00:34:58.496 CC lib/nvme/nvme_io_msg.o 00:34:58.496 CC lib/nvme/nvme_poll_group.o 00:34:58.496 LIB libspdk_init.a 00:34:58.496 CC lib/nvme/nvme_zns.o 00:34:58.496 CC lib/nvme/nvme_cuse.o 00:34:58.496 CC lib/nvme/nvme_vfio_user.o 00:34:58.755 CC lib/event/app.o 00:34:58.755 CC lib/event/reactor.o 00:34:59.014 CC lib/event/log_rpc.o 00:34:59.014 CC lib/nvme/nvme_rdma.o 00:34:59.014 CC lib/event/app_rpc.o 00:34:59.014 CC lib/event/scheduler_static.o 00:34:59.014 CC lib/bdev/bdev_rpc.o 00:34:59.014 CC lib/bdev/bdev_zone.o 00:34:59.014 LIB libspdk_blob.a 00:34:59.014 CC lib/bdev/part.o 00:34:59.272 CC lib/bdev/scsi_nvme.o 00:34:59.272 LIB libspdk_event.a 00:34:59.272 CC lib/blobfs/blobfs.o 00:34:59.272 CC lib/blobfs/tree.o 00:34:59.272 CC lib/lvol/lvol.o 00:34:59.530 LIB libspdk_blobfs.a 00:34:59.789 LIB libspdk_bdev.a 00:34:59.789 LIB libspdk_lvol.a 00:34:59.789 LIB libspdk_nvme.a 00:34:59.789 CC lib/nbd/nbd.o 00:34:59.789 CC lib/nbd/nbd_rpc.o 00:34:59.789 CC lib/scsi/lun.o 00:34:59.789 CC lib/scsi/dev.o 00:34:59.789 CC lib/scsi/port.o 00:34:59.789 CC lib/scsi/scsi_bdev.o 00:34:59.789 CC lib/scsi/scsi.o 00:34:59.789 CC lib/scsi/scsi_pr.o 00:34:59.789 CC lib/scsi/scsi_rpc.o 00:34:59.789 CC lib/ftl/ftl_core.o 00:35:00.046 CC lib/scsi/task.o 00:35:00.046 CC lib/ftl/ftl_init.o 00:35:00.046 CC lib/ftl/ftl_layout.o 00:35:00.046 CC lib/ftl/ftl_debug.o 00:35:00.046 CC lib/ftl/ftl_io.o 00:35:00.046 CC lib/nvmf/ctrlr.o 00:35:00.046 CC lib/ftl/ftl_sb.o 00:35:00.046 LIB libspdk_nbd.a 00:35:00.046 CC lib/ftl/ftl_l2p.o 00:35:00.046 CC lib/ftl/ftl_l2p_flat.o 00:35:00.046 LIB libspdk_scsi.a 00:35:00.046 CC lib/ftl/ftl_nv_cache.o 00:35:00.046 CC lib/ftl/ftl_band.o 00:35:00.046 CC lib/ftl/ftl_band_ops.o 00:35:00.304 CC lib/ftl/ftl_writer.o 00:35:00.304 CC lib/nvmf/ctrlr_discovery.o 00:35:00.304 CC lib/iscsi/conn.o 00:35:00.304 CC lib/iscsi/init_grp.o 00:35:00.304 CC lib/vhost/vhost.o 00:35:00.304 CC lib/ftl/ftl_rq.o 00:35:00.304 CC lib/vhost/vhost_rpc.o 00:35:00.304 CC lib/vhost/vhost_scsi.o 00:35:00.304 CC lib/vhost/vhost_blk.o 00:35:00.304 CC lib/vhost/rte_vhost_user.o 00:35:00.304 CC lib/iscsi/iscsi.o 00:35:00.563 CC lib/iscsi/md5.o 00:35:00.563 CC lib/iscsi/param.o 00:35:00.563 CC lib/iscsi/portal_grp.o 00:35:00.563 CC lib/nvmf/ctrlr_bdev.o 00:35:00.563 CC lib/ftl/ftl_reloc.o 00:35:00.821 CC lib/iscsi/tgt_node.o 00:35:00.821 CC lib/ftl/ftl_l2p_cache.o 00:35:00.821 CC lib/ftl/ftl_p2l.o 00:35:00.821 CC lib/nvmf/subsystem.o 00:35:00.821 CC lib/nvmf/nvmf.o 00:35:00.821 CC lib/ftl/mngt/ftl_mngt.o 00:35:01.079 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:35:01.079 CC lib/nvmf/nvmf_rpc.o 
00:35:01.079 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:35:01.079 CC lib/ftl/mngt/ftl_mngt_startup.o 00:35:01.079 CC lib/iscsi/iscsi_subsystem.o 00:35:01.079 CC lib/iscsi/iscsi_rpc.o 00:35:01.079 CC lib/iscsi/task.o 00:35:01.079 LIB libspdk_vhost.a 00:35:01.079 CC lib/nvmf/transport.o 00:35:01.079 CC lib/nvmf/tcp.o 00:35:01.079 CC lib/nvmf/rdma.o 00:35:01.079 CC lib/ftl/mngt/ftl_mngt_md.o 00:35:01.079 CC lib/ftl/mngt/ftl_mngt_misc.o 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:35:01.338 LIB libspdk_iscsi.a 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_band.o 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:35:01.338 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:35:01.338 CC lib/ftl/utils/ftl_conf.o 00:35:01.338 CC lib/ftl/utils/ftl_md.o 00:35:01.338 CC lib/ftl/utils/ftl_mempool.o 00:35:01.596 CC lib/ftl/utils/ftl_bitmap.o 00:35:01.596 CC lib/ftl/utils/ftl_property.o 00:35:01.596 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:35:01.596 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:35:01.596 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:35:01.596 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:35:01.596 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:35:01.596 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:35:01.596 CC lib/ftl/upgrade/ftl_sb_v3.o 00:35:01.596 CC lib/ftl/upgrade/ftl_sb_v5.o 00:35:01.854 CC lib/ftl/nvc/ftl_nvc_dev.o 00:35:01.854 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:35:01.854 CC lib/ftl/base/ftl_base_dev.o 00:35:01.854 CC lib/ftl/base/ftl_base_bdev.o 00:35:01.854 LIB libspdk_nvmf.a 00:35:01.854 LIB libspdk_ftl.a 00:35:02.113 CC module/env_dpdk/env_dpdk_rpc.o 00:35:02.113 CC module/accel/error/accel_error.o 00:35:02.113 CC module/accel/dsa/accel_dsa.o 00:35:02.113 CC module/accel/iaa/accel_iaa.o 00:35:02.113 CC module/scheduler/dynamic/scheduler_dynamic.o 00:35:02.113 CC module/blob/bdev/blob_bdev.o 00:35:02.113 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:35:02.113 CC module/accel/ioat/accel_ioat.o 00:35:02.113 CC module/sock/posix/posix.o 00:35:02.113 CC module/scheduler/gscheduler/gscheduler.o 00:35:02.371 LIB libspdk_env_dpdk_rpc.a 00:35:02.371 CC module/accel/ioat/accel_ioat_rpc.o 00:35:02.371 LIB libspdk_scheduler_dpdk_governor.a 00:35:02.371 CC module/accel/error/accel_error_rpc.o 00:35:02.371 LIB libspdk_scheduler_dynamic.a 00:35:02.371 LIB libspdk_scheduler_gscheduler.a 00:35:02.371 CC module/accel/iaa/accel_iaa_rpc.o 00:35:02.371 CC module/accel/dsa/accel_dsa_rpc.o 00:35:02.372 LIB libspdk_blob_bdev.a 00:35:02.372 LIB libspdk_accel_ioat.a 00:35:02.372 LIB libspdk_accel_error.a 00:35:02.372 LIB libspdk_accel_iaa.a 00:35:02.372 LIB libspdk_accel_dsa.a 00:35:02.372 CC module/bdev/lvol/vbdev_lvol.o 00:35:02.372 CC module/bdev/delay/vbdev_delay.o 00:35:02.372 CC module/blobfs/bdev/blobfs_bdev.o 00:35:02.372 CC module/bdev/error/vbdev_error.o 00:35:02.372 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:35:02.372 CC module/bdev/malloc/bdev_malloc.o 00:35:02.372 CC module/bdev/gpt/gpt.o 00:35:02.630 CC module/bdev/null/bdev_null.o 00:35:02.630 CC module/bdev/nvme/bdev_nvme.o 00:35:02.630 LIB libspdk_sock_posix.a 00:35:02.630 CC module/bdev/nvme/bdev_nvme_rpc.o 00:35:02.630 CC module/bdev/nvme/nvme_rpc.o 00:35:02.630 LIB libspdk_blobfs_bdev.a 00:35:02.630 CC module/bdev/gpt/vbdev_gpt.o 00:35:02.630 CC module/bdev/nvme/bdev_mdns_client.o 00:35:02.630 CC module/bdev/error/vbdev_error_rpc.o 00:35:02.630 CC module/bdev/null/bdev_null_rpc.o 00:35:02.630 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:35:02.630 CC module/bdev/delay/vbdev_delay_rpc.o 00:35:02.889 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:35:02.889 CC module/bdev/nvme/vbdev_opal.o 00:35:02.889 CC module/bdev/nvme/vbdev_opal_rpc.o 00:35:02.889 LIB libspdk_bdev_error.a 00:35:02.889 LIB libspdk_bdev_null.a 00:35:02.889 LIB libspdk_bdev_malloc.a 00:35:02.889 LIB libspdk_bdev_gpt.a 00:35:02.889 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:35:02.889 LIB libspdk_bdev_delay.a 00:35:02.889 CC module/bdev/passthru/vbdev_passthru.o 00:35:02.889 CC module/bdev/raid/bdev_raid.o 00:35:02.889 CC module/bdev/split/vbdev_split.o 00:35:02.889 CC module/bdev/zone_block/vbdev_zone_block.o 00:35:02.889 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:35:02.889 CC module/bdev/split/vbdev_split_rpc.o 00:35:02.889 LIB libspdk_bdev_lvol.a 00:35:03.147 CC module/bdev/raid/bdev_raid_rpc.o 00:35:03.147 CC module/bdev/aio/bdev_aio.o 00:35:03.147 CC module/bdev/ftl/bdev_ftl.o 00:35:03.147 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:35:03.147 CC module/bdev/ftl/bdev_ftl_rpc.o 00:35:03.147 LIB libspdk_bdev_split.a 00:35:03.147 LIB libspdk_bdev_zone_block.a 00:35:03.147 CC module/bdev/raid/bdev_raid_sb.o 00:35:03.147 CC module/bdev/iscsi/bdev_iscsi.o 00:35:03.147 CC module/bdev/virtio/bdev_virtio_scsi.o 00:35:03.147 CC module/bdev/raid/raid0.o 00:35:03.147 LIB libspdk_bdev_passthru.a 00:35:03.147 CC module/bdev/raid/raid1.o 00:35:03.405 CC module/bdev/raid/concat.o 00:35:03.405 CC module/bdev/aio/bdev_aio_rpc.o 00:35:03.405 LIB libspdk_bdev_ftl.a 00:35:03.405 CC module/bdev/raid/raid5f.o 00:35:03.405 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:35:03.405 CC module/bdev/virtio/bdev_virtio_blk.o 00:35:03.405 CC module/bdev/virtio/bdev_virtio_rpc.o 00:35:03.405 LIB libspdk_bdev_aio.a 00:35:03.405 LIB libspdk_bdev_nvme.a 00:35:03.405 LIB libspdk_bdev_iscsi.a 00:35:03.664 LIB libspdk_bdev_virtio.a 00:35:03.664 LIB libspdk_bdev_raid.a 00:35:03.923 CC module/event/subsystems/scheduler/scheduler.o 00:35:03.923 CC module/event/subsystems/vmd/vmd.o 00:35:03.923 CC module/event/subsystems/sock/sock.o 00:35:03.923 CC module/event/subsystems/vmd/vmd_rpc.o 00:35:03.923 CC module/event/subsystems/iobuf/iobuf.o 00:35:03.923 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:35:03.923 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:35:03.923 LIB libspdk_event_sock.a 00:35:03.923 LIB libspdk_event_scheduler.a 00:35:03.923 LIB libspdk_event_vhost_blk.a 00:35:03.923 LIB libspdk_event_vmd.a 00:35:03.923 LIB libspdk_event_iobuf.a 00:35:04.181 CC module/event/subsystems/accel/accel.o 00:35:04.182 LIB libspdk_event_accel.a 00:35:04.182 CC module/event/subsystems/bdev/bdev.o 00:35:04.439 LIB libspdk_event_bdev.a 00:35:04.440 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:35:04.440 CC module/event/subsystems/nbd/nbd.o 00:35:04.440 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:35:04.440 CC module/event/subsystems/scsi/scsi.o 00:35:04.698 LIB libspdk_event_nbd.a 00:35:04.698 LIB libspdk_event_scsi.a 00:35:04.698 LIB libspdk_event_nvmf.a 00:35:04.698 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:35:04.698 CC module/event/subsystems/iscsi/iscsi.o 00:35:04.956 LIB libspdk_event_iscsi.a 00:35:04.956 LIB libspdk_event_vhost_scsi.a 00:35:04.956 CXX app/trace/trace.o 00:35:04.956 CC app/trace_record/trace_record.o 00:35:05.214 CC examples/accel/perf/accel_perf.o 00:35:05.214 CC examples/ioat/perf/perf.o 00:35:05.214 CC app/iscsi_tgt/iscsi_tgt.o 00:35:05.214 CC app/nvmf_tgt/nvmf_main.o 00:35:05.214 CC examples/blob/hello_world/hello_blob.o 
00:35:05.214 CC app/spdk_tgt/spdk_tgt.o 00:35:05.215 CC examples/bdev/hello_world/hello_bdev.o 00:35:05.215 CC test/accel/dif/dif.o 00:35:05.215 LINK spdk_trace_record 00:35:05.473 LINK ioat_perf 00:35:05.473 LINK spdk_tgt 00:35:05.473 LINK iscsi_tgt 00:35:05.473 LINK nvmf_tgt 00:35:05.473 LINK hello_blob 00:35:05.473 LINK hello_bdev 00:35:05.473 LINK accel_perf 00:35:05.473 LINK spdk_trace 00:35:05.473 LINK dif 00:35:08.005 CC examples/blob/cli/blobcli.o 00:35:08.572 LINK blobcli 00:35:09.974 CC examples/ioat/verify/verify.o 00:35:10.232 LINK verify 00:35:11.609 CC app/spdk_lspci/spdk_lspci.o 00:35:12.177 LINK spdk_lspci 00:35:24.379 CC app/spdk_nvme_perf/perf.o 00:35:28.567 CC examples/nvme/hello_world/hello_world.o 00:35:28.567 LINK spdk_nvme_perf 00:35:29.133 LINK hello_world 00:35:55.702 CC examples/nvme/reconnect/reconnect.o 00:35:56.270 LINK reconnect 00:36:22.834 CC test/app/bdev_svc/bdev_svc.o 00:36:22.834 LINK bdev_svc 00:36:23.092 CC test/bdev/bdevio/bdevio.o 00:36:25.626 LINK bdevio 00:36:43.706 CC examples/nvme/nvme_manage/nvme_manage.o 00:36:46.237 LINK nvme_manage 00:37:32.957 CC examples/nvme/arbitration/arbitration.o 00:37:32.957 CC examples/bdev/bdevperf/bdevperf.o 00:37:32.957 CC examples/nvme/hotplug/hotplug.o 00:37:32.957 LINK arbitration 00:37:32.957 LINK hotplug 00:37:32.957 LINK bdevperf 00:37:32.957 TEST_HEADER include/spdk/config.h 00:37:32.957 CXX test/cpp_headers/accel.o 00:37:32.957 CC test/blobfs/mkfs/mkfs.o 00:37:32.957 CXX test/cpp_headers/accel_module.o 00:37:32.957 LINK mkfs 00:37:33.892 CXX test/cpp_headers/assert.o 00:37:34.829 CXX test/cpp_headers/barrier.o 00:37:35.766 CXX test/cpp_headers/base64.o 00:37:36.702 CXX test/cpp_headers/bdev.o 00:37:37.638 CXX test/cpp_headers/bdev_module.o 00:37:38.575 CXX test/cpp_headers/bdev_zone.o 00:37:39.512 CXX test/cpp_headers/bit_array.o 00:37:39.512 CC test/dma/test_dma/test_dma.o 00:37:40.462 CXX test/cpp_headers/bit_pool.o 00:37:41.431 LINK test_dma 00:37:41.431 CXX test/cpp_headers/blob.o 00:37:42.367 CXX test/cpp_headers/blob_bdev.o 00:37:42.934 CXX test/cpp_headers/blobfs.o 00:37:44.313 CXX test/cpp_headers/blobfs_bdev.o 00:37:45.249 CXX test/cpp_headers/conf.o 00:37:45.816 CXX test/cpp_headers/config.o 00:37:46.383 CXX test/cpp_headers/cpuset.o 00:37:46.383 CC examples/nvme/cmb_copy/cmb_copy.o 00:37:47.318 CXX test/cpp_headers/crc16.o 00:37:47.577 LINK cmb_copy 00:37:48.513 CXX test/cpp_headers/crc32.o 00:37:49.448 CXX test/cpp_headers/crc64.o 00:37:50.386 CXX test/cpp_headers/dif.o 00:37:50.953 CXX test/cpp_headers/dma.o 00:37:51.889 CXX test/cpp_headers/endian.o 00:37:52.457 CXX test/cpp_headers/env.o 00:37:53.394 CXX test/cpp_headers/env_dpdk.o 00:37:54.329 CXX test/cpp_headers/event.o 00:37:55.263 CXX test/cpp_headers/fd.o 00:37:55.847 CXX test/cpp_headers/fd_group.o 00:37:56.412 CC app/spdk_nvme_identify/identify.o 00:37:56.670 CXX test/cpp_headers/file.o 00:37:57.606 CXX test/cpp_headers/ftl.o 00:37:58.541 CXX test/cpp_headers/gpt_spec.o 00:37:59.109 LINK spdk_nvme_identify 00:37:59.367 CXX test/cpp_headers/hexlify.o 00:37:59.935 CC test/env/mem_callbacks/mem_callbacks.o 00:38:00.194 CXX test/cpp_headers/histogram_data.o 00:38:01.132 LINK mem_callbacks 00:38:01.390 CXX test/cpp_headers/idxd.o 00:38:01.959 CXX test/cpp_headers/idxd_spec.o 00:38:02.527 CC app/spdk_nvme_discover/discovery_aer.o 00:38:02.527 CXX test/cpp_headers/init.o 00:38:03.093 CXX test/cpp_headers/ioat.o 00:38:04.038 LINK spdk_nvme_discover 00:38:04.038 CXX test/cpp_headers/ioat_spec.o 00:38:04.973 CXX test/cpp_headers/iscsi_spec.o 
00:38:05.909 CC test/env/vtophys/vtophys.o 00:38:06.167 CXX test/cpp_headers/json.o 00:38:06.739 LINK vtophys 00:38:07.305 CXX test/cpp_headers/jsonrpc.o 00:38:08.677 CXX test/cpp_headers/likely.o 00:38:09.611 CXX test/cpp_headers/log.o 00:38:10.984 CXX test/cpp_headers/lvol.o 00:38:12.360 CXX test/cpp_headers/memory.o 00:38:13.363 CXX test/cpp_headers/mmio.o 00:38:14.739 CXX test/cpp_headers/nbd.o 00:38:14.739 CXX test/cpp_headers/notify.o 00:38:15.675 CXX test/cpp_headers/nvme.o 00:38:17.053 CC examples/nvme/abort/abort.o 00:38:17.053 CXX test/cpp_headers/nvme_intel.o 00:38:17.989 CXX test/cpp_headers/nvme_ocssd.o 00:38:18.926 LINK abort 00:38:19.185 CXX test/cpp_headers/nvme_ocssd_spec.o 00:38:20.564 CXX test/cpp_headers/nvme_spec.o 00:38:21.497 CXX test/cpp_headers/nvme_zns.o 00:38:21.755 CXX test/cpp_headers/nvmf.o 00:38:22.012 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:38:22.947 CXX test/cpp_headers/nvmf_cmd.o 00:38:22.947 LINK env_dpdk_post_init 00:38:23.883 CXX test/cpp_headers/nvmf_fc_spec.o 00:38:25.259 CXX test/cpp_headers/nvmf_spec.o 00:38:26.222 CXX test/cpp_headers/nvmf_transport.o 00:38:27.158 CXX test/cpp_headers/opal.o 00:38:28.538 CXX test/cpp_headers/opal_spec.o 00:38:29.475 CXX test/cpp_headers/pci_ids.o 00:38:29.746 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:38:30.314 CXX test/cpp_headers/pipe.o 00:38:30.882 CC app/spdk_top/spdk_top.o 00:38:31.141 CXX test/cpp_headers/queue.o 00:38:31.141 LINK nvme_fuzz 00:38:31.141 CXX test/cpp_headers/reduce.o 00:38:32.075 CXX test/cpp_headers/rpc.o 00:38:33.013 CXX test/cpp_headers/scheduler.o 00:38:33.582 LINK spdk_top 00:38:33.841 CXX test/cpp_headers/scsi.o 00:38:34.778 CXX test/cpp_headers/scsi_spec.o 00:38:35.716 CXX test/cpp_headers/sock.o 00:38:35.976 CXX test/cpp_headers/stdinc.o 00:38:36.914 CXX test/cpp_headers/string.o 00:38:37.483 CC app/vhost/vhost.o 00:38:38.052 CXX test/cpp_headers/thread.o 00:38:38.621 LINK vhost 00:38:38.880 CXX test/cpp_headers/trace.o 00:38:39.449 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:38:40.014 CXX test/cpp_headers/trace_parser.o 00:38:40.273 LINK pmr_persistence 00:38:40.842 CXX test/cpp_headers/tree.o 00:38:41.101 CXX test/cpp_headers/ublk.o 00:38:41.670 CXX test/cpp_headers/util.o 00:38:42.607 CXX test/cpp_headers/uuid.o 00:38:43.174 CXX test/cpp_headers/version.o 00:38:43.433 CXX test/cpp_headers/vfio_user_pci.o 00:38:44.001 CXX test/cpp_headers/vfio_user_spec.o 00:38:44.939 CXX test/cpp_headers/vhost.o 00:38:45.508 CXX test/cpp_headers/vmd.o 00:38:46.076 CXX test/cpp_headers/xor.o 00:38:47.013 CXX test/cpp_headers/zipf.o 00:38:47.013 CC app/spdk_dd/spdk_dd.o 00:38:47.950 CC test/env/memory/memory_ut.o 00:38:47.950 CC test/env/pci/pci_ut.o 00:38:48.209 CC test/app/histogram_perf/histogram_perf.o 00:38:48.209 LINK spdk_dd 00:38:48.778 LINK histogram_perf 00:38:49.346 LINK pci_ut 00:38:49.606 LINK memory_ut 00:38:57.801 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:38:57.801 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:38:58.369 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:38:59.753 LINK vhost_fuzz 00:39:01.130 LINK iscsi_fuzz 00:39:01.390 CC app/fio/nvme/fio_plugin.o 00:39:01.649 CC test/event/event_perf/event_perf.o 00:39:02.217 LINK event_perf 00:39:03.155 LINK spdk_nvme 00:39:06.445 CC test/event/reactor/reactor.o 00:39:07.013 LINK reactor 00:39:09.573 CC examples/sock/hello_world/hello_sock.o 00:39:10.949 LINK hello_sock 00:39:15.139 CC examples/vmd/lsvmd/lsvmd.o 00:39:15.397 LINK lsvmd 00:39:17.302 CC examples/vmd/led/led.o 00:39:18.240 LINK led 00:39:18.499 CC 
app/fio/bdev/fio_plugin.o 00:39:20.403 LINK spdk_bdev 00:39:23.692 CC test/lvol/esnap/esnap.o 00:39:24.629 CC test/event/reactor_perf/reactor_perf.o 00:39:25.197 LINK reactor_perf 00:39:29.390 CC test/event/app_repeat/app_repeat.o 00:39:29.390 CC test/app/jsoncat/jsoncat.o 00:39:29.649 LINK app_repeat 00:39:30.216 LINK jsoncat 00:39:36.786 LINK esnap 00:39:41.084 CC test/app/stub/stub.o 00:39:41.084 CC examples/nvmf/nvmf/nvmf.o 00:39:41.343 LINK stub 00:39:42.281 LINK nvmf 00:39:43.216 CC examples/util/zipf/zipf.o 00:39:43.216 CC examples/thread/thread/thread_ex.o 00:39:43.785 LINK zipf 00:39:44.044 LINK thread 00:39:44.980 CC test/event/scheduler/scheduler.o 00:39:45.239 LINK scheduler 00:39:50.508 CC examples/idxd/perf/perf.o 00:39:50.768 LINK idxd_perf 00:40:02.979 CC examples/interrupt_tgt/interrupt_tgt.o 00:40:02.979 LINK interrupt_tgt 00:40:05.514 CC test/nvme/aer/aer.o 00:40:07.419 LINK aer 00:40:07.987 CC test/nvme/reset/reset.o 00:40:09.889 LINK reset 00:40:22.100 CC test/rpc_client/rpc_client_test.o 00:40:22.667 LINK rpc_client_test 00:40:37.544 CC test/thread/poller_perf/poller_perf.o 00:40:38.113 LINK poller_perf 00:40:41.401 CC test/thread/lock/spdk_lock.o 00:40:45.595 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:40:45.595 CC test/nvme/sgl/sgl.o 00:40:46.534 LINK histogram_ut 00:40:46.534 LINK spdk_lock 00:40:47.142 LINK sgl 00:40:48.533 CC test/nvme/e2edp/nvme_dp.o 00:40:49.470 LINK nvme_dp 00:40:53.662 CC test/unit/lib/accel/accel.c/accel_ut.o 00:40:58.933 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:41:03.125 LINK accel_ut 00:41:11.249 CC test/unit/lib/bdev/part.c/part_ut.o 00:41:16.522 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:41:17.090 LINK scsi_nvme_ut 00:41:17.090 LINK bdev_ut 00:41:18.468 LINK part_ut 00:41:18.725 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:41:18.725 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:41:19.291 LINK gpt_ut 00:41:19.291 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:41:20.673 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:41:22.048 LINK vbdev_lvol_ut 00:41:22.307 CC test/nvme/overhead/overhead.o 00:41:23.242 LINK overhead 00:41:26.530 CC test/nvme/err_injection/err_injection.o 00:41:26.530 LINK bdev_ut 00:41:26.530 LINK bdev_raid_ut 00:41:27.469 LINK err_injection 00:41:28.038 CC test/nvme/startup/startup.o 00:41:29.417 LINK startup 00:41:33.668 CC test/nvme/reserve/reserve.o 00:41:34.236 LINK reserve 00:41:40.809 CC test/nvme/simple_copy/simple_copy.o 00:41:40.809 LINK simple_copy 00:41:41.376 CC test/nvme/connect_stress/connect_stress.o 00:41:41.944 CC test/nvme/boot_partition/boot_partition.o 00:41:42.512 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:41:42.512 LINK connect_stress 00:41:42.512 LINK boot_partition 00:41:43.450 LINK bdev_raid_sb_ut 00:41:50.011 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:41:50.011 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:41:50.578 LINK bdev_zone_ut 00:41:51.514 LINK concat_ut 00:41:53.420 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:41:55.326 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:41:55.326 LINK vbdev_zone_block_ut 00:41:55.895 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:41:56.154 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:41:56.723 CC test/unit/lib/dma/dma.c/dma_ut.o 00:41:56.723 LINK tree_ut 00:41:56.982 LINK raid1_ut 00:41:57.549 LINK blob_bdev_ut 00:41:57.808 LINK dma_ut 00:41:58.746 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:41:59.684 CC 
test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:42:00.251 LINK blobfs_async_ut 00:42:00.251 CC test/unit/lib/event/app.c/app_ut.o 00:42:00.511 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:42:00.511 LINK raid5f_ut 00:42:00.770 LINK app_ut 00:42:00.770 CC test/unit/lib/blob/blob.c/blob_ut.o 00:42:00.770 CC test/nvme/compliance/nvme_compliance.o 00:42:01.338 LINK reactor_ut 00:42:01.338 CC test/nvme/fused_ordering/fused_ordering.o 00:42:01.598 LINK nvme_compliance 00:42:01.598 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:42:01.857 LINK fused_ordering 00:42:02.425 CC test/nvme/doorbell_aers/doorbell_aers.o 00:42:02.684 LINK doorbell_aers 00:42:02.943 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:42:03.202 LINK blobfs_sync_ut 00:42:03.771 LINK ioat_ut 00:42:05.148 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:42:06.527 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:42:06.786 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:42:07.046 LINK conn_ut 00:42:07.984 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:42:08.923 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:42:08.923 LINK json_util_ut 00:42:10.827 LINK blob_ut 00:42:11.095 LINK json_parse_ut 00:42:11.371 LINK json_write_ut 00:42:13.909 LINK bdev_nvme_ut 00:42:17.199 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:42:17.199 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:42:17.199 LINK blobfs_bdev_ut 00:42:17.199 LINK init_grp_ut 00:42:20.485 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:42:20.485 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:42:21.052 CC test/unit/lib/iscsi/param.c/param_ut.o 00:42:21.620 LINK jsonrpc_server_ut 00:42:21.620 CC test/unit/lib/log/log.c/log_ut.o 00:42:21.880 LINK param_ut 00:42:22.139 LINK log_ut 00:42:22.398 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:42:22.656 LINK iscsi_ut 00:42:22.915 CC test/nvme/fdp/fdp.o 00:42:23.172 CC test/unit/lib/notify/notify.c/notify_ut.o 00:42:23.172 LINK notify_ut 00:42:23.431 LINK fdp 00:42:23.431 CC test/nvme/cuse/cuse.o 00:42:23.999 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:42:23.999 LINK lvol_ut 00:42:24.258 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:42:24.518 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:42:24.518 LINK cuse 00:42:25.456 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:42:26.395 LINK nvme_ctrlr_cmd_ut 00:42:26.655 LINK nvme_ut 00:42:26.915 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:42:28.822 LINK nvme_ctrlr_ut 00:42:28.822 LINK tcp_ut 00:42:29.081 LINK ctrlr_ut 00:42:29.081 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:42:29.340 CC test/unit/lib/sock/sock.c/sock_ut.o 00:42:29.340 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:42:30.716 LINK dev_ut 00:42:30.716 LINK portal_grp_ut 00:42:32.617 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:42:32.617 LINK sock_ut 00:42:33.991 LINK lun_ut 00:42:33.991 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:42:34.924 LINK scsi_ut 00:42:35.860 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:42:36.119 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:42:36.378 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:42:36.378 CC test/unit/lib/sock/posix.c/posix_ut.o 00:42:36.378 LINK nvme_ctrlr_ocssd_cmd_ut 00:42:36.378 LINK scsi_bdev_ut 00:42:36.378 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:42:36.636 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:42:36.637 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:42:36.637 LINK tgt_node_ut 00:42:36.895 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 
00:42:36.895 LINK scsi_pr_ut 00:42:36.895 LINK posix_ut 00:42:37.153 LINK ctrlr_bdev_ut 00:42:37.412 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:42:37.412 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:42:37.412 LINK ctrlr_discovery_ut 00:42:37.670 LINK subsystem_ut 00:42:37.928 LINK nvme_ns_ut 00:42:38.187 LINK nvmf_ut 00:42:39.122 CC test/unit/lib/util/base64.c/base64_ut.o 00:42:39.122 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:42:39.122 CC test/unit/lib/thread/thread.c/thread_ut.o 00:42:39.380 LINK base64_ut 00:42:39.945 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:42:40.511 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:42:40.511 LINK thread_ut 00:42:40.511 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:42:40.511 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:42:40.511 LINK pci_event_ut 00:42:40.511 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:42:40.770 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:42:40.770 LINK bit_array_ut 00:42:41.029 LINK subsystem_ut 00:42:41.029 LINK transport_ut 00:42:41.029 LINK rdma_ut 00:42:41.029 LINK rpc_ut 00:42:41.597 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:42:41.597 LINK nvme_ns_cmd_ut 00:42:42.166 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:42:42.424 LINK idxd_user_ut 00:42:42.993 LINK idxd_ut 00:42:43.586 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:42:43.871 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:42:44.138 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:42:44.138 LINK cpuset_ut 00:42:44.397 LINK iobuf_ut 00:42:45.332 CC test/unit/lib/rdma/common.c/common_ut.o 00:42:45.332 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:42:45.591 LINK common_ut 00:42:45.850 LINK ftl_l2p_ut 00:42:45.850 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:42:45.850 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:42:46.108 LINK crc16_ut 00:42:46.108 LINK vhost_ut 00:42:46.108 LINK crc32_ieee_ut 00:42:46.367 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:42:46.626 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:42:46.885 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:42:46.885 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:42:46.885 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:42:47.144 LINK crc32c_ut 00:42:47.144 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:42:47.403 LINK nvme_ns_ocssd_cmd_ut 00:42:47.403 LINK ftl_band_ut 00:42:47.662 LINK nvme_poll_group_ut 00:42:47.662 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:42:47.921 LINK crc64_ut 00:42:47.921 LINK nvme_pcie_ut 00:42:47.921 CC test/unit/lib/util/dif.c/dif_ut.o 00:42:48.181 CC test/unit/lib/util/iov.c/iov_ut.o 00:42:48.181 LINK nvme_qpair_ut 00:42:48.439 LINK iov_ut 00:42:48.698 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:42:48.957 LINK dif_ut 00:42:49.215 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:42:49.215 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:42:49.215 LINK nvme_quirks_ut 00:42:49.215 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:42:49.781 LINK ftl_io_ut 00:42:50.039 LINK nvme_transport_ut 00:42:50.298 CC test/unit/lib/util/math.c/math_ut.o 00:42:50.298 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:42:50.298 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:42:50.298 LINK math_ut 00:42:50.298 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:42:50.298 LINK nvme_tcp_ut 00:42:50.557 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:42:51.124 LINK pipe_ut 00:42:51.124 LINK nvme_pcie_common_ut 00:42:51.383 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 
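The CC test/unit/lib/... objects and their LINK lines in this stretch are SPDK's per-library unit tests being compiled during the release build; each LINK name is the resulting unit-test binary. They are not executed in this packaging step. When they are run, the whole suite is typically driven in one pass by the repository's unit-test runner rather than binary by binary, roughly as below (a sketch, assuming the standard SPDK layout used by this job):

  # sketch: run the unit-test suite after the build above has completed
  cd /home/vagrant/spdk_repo/spdk
  ./test/unit/unittest.sh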
00:42:51.383 LINK nvme_io_msg_ut 00:42:51.641 LINK nvme_fabric_ut 00:42:51.900 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:42:51.900 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:42:52.158 LINK nvme_opal_ut 00:42:52.416 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:42:52.675 LINK ftl_bitmap_ut 00:42:52.934 LINK nvme_cuse_ut 00:42:52.934 CC test/unit/lib/util/string.c/string_ut.o 00:42:52.934 CC test/unit/lib/util/xor.c/xor_ut.o 00:42:52.934 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:42:52.934 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:42:53.193 LINK string_ut 00:42:53.193 LINK nvme_rdma_ut 00:42:53.193 LINK xor_ut 00:42:53.193 LINK ftl_mempool_ut 00:42:53.452 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:42:53.452 LINK ftl_mngt_ut 00:42:53.452 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:42:54.020 LINK ftl_sb_ut 00:42:54.020 LINK ftl_layout_upgrade_ut 00:43:50.246 json_parse_ut.c: In function ‘test_parse_nesting’: 00:43:50.246 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without 00:43:50.246 616 | test_parse_nesting(void) 00:43:50.246 | ^ 00:43:50.246 12:27:47 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:43:50.246 make[1]: Nothing to be done for 'clean'. 00:43:50.246 12:27:51 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:43:50.246 12:27:51 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:43:50.246 12:27:51 -- common/autotest_common.sh@10 -- $ set +x 00:43:50.246 12:27:51 -- spdk/autopackage.sh@48 -- $ timing_finish 00:43:50.246 12:27:51 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:50.246 12:27:51 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:43:50.246 12:27:51 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:50.246 + [[ -n 2280 ]] 00:43:50.246 + sudo kill 2280 00:43:50.253 [Pipeline] } 00:43:50.263 [Pipeline] // timeout 00:43:50.267 [Pipeline] } 00:43:50.276 [Pipeline] // stage 00:43:50.280 [Pipeline] } 00:43:50.289 [Pipeline] // catchError 00:43:50.296 [Pipeline] stage 00:43:50.298 [Pipeline] { (Stop VM) 00:43:50.308 [Pipeline] sh 00:43:50.586 + vagrant halt 00:43:53.869 ==> default: Halting domain... 00:44:32.596 [Pipeline] sh 00:44:32.895 + vagrant destroy -f 00:44:35.471 ==> default: Removing domain... 00:44:36.423 [Pipeline] sh 00:44:36.710 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:44:36.720 [Pipeline] } 00:44:36.739 [Pipeline] // stage 00:44:36.745 [Pipeline] } 00:44:36.763 [Pipeline] // dir 00:44:36.768 [Pipeline] } 00:44:36.786 [Pipeline] // wrap 00:44:36.793 [Pipeline] } 00:44:36.808 [Pipeline] // catchError 00:44:36.819 [Pipeline] stage 00:44:36.822 [Pipeline] { (Epilogue) 00:44:36.838 [Pipeline] sh 00:44:37.126 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:52.026 [Pipeline] catchError 00:44:52.029 [Pipeline] { 00:44:52.044 [Pipeline] sh 00:44:52.324 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:52.583 Artifacts sizes are good 00:44:52.592 [Pipeline] } 00:44:52.606 [Pipeline] // catchError 00:44:52.617 [Pipeline] archiveArtifacts 00:44:52.624 Archiving artifacts 00:44:52.902 [Pipeline] cleanWs 00:44:52.915 [WS-CLEANUP] Deleting project workspace... 00:44:52.915 [WS-CLEANUP] Deferred wipeout is used... 
00:44:52.921 [WS-CLEANUP] done
00:44:52.922 [Pipeline] }
00:44:52.937 [Pipeline] // stage
00:44:52.943 [Pipeline] }
00:44:52.956 [Pipeline] // node
00:44:52.962 [Pipeline] End of Pipeline
00:44:53.001 Finished: SUCCESS